Giving the Law a “Like”? Matthias Kettemann’s Comments on Austria’s draft Communications Platforms Law

October 19, 2020 in Opinion

Portrait Kettemann (c) Universität Graz

Matthias C. Kettemann *

Hatred on the net has reached unbearable proportions. With the platform law, the legislator is doing many things right. But the “milestone” for more online protection still has a few rough edges.

Germany has one, France wanted one, Turkey and Brazil are working on one right now – and Austria has now pushed ahead with one: a law that makes platforms more responsible for dealing with online hatred and imposes transparency obligations on them. The draft of the Communication Platforms Act (KoPl-G) was notified to the Commission this week.

First of all: with regard to victim protection, the legislative package against hate on the net is exactly what Minister for Women’s Affairs Susanne Raab (ÖVP) calls it: a milestone. Protecting victims from threats and degradation is of great importance. The package achieves this through better protection against hate speech, a tightening of the offence of incitement to hatred, faster defences against cyberbullying and the prohibition of “upskirting”. The Communication Platforms Act (KoPl-G) itself, however, still has a few rough edges.

What looks good

The present draft is legally well crafted. Even the name alone is better than that of the German Network Enforcement Act, whose short title is odd to say the least (which network is supposed to be enforced?). In the impact assessment, the government also shows a considerable measure of humility and makes it clear that a European solution against hate on the net, with greater involvement of the platforms, would have been better – but that takes time, so national measures have to be taken in the meantime.

The government has learned from the German NetzDG. The draft adopts the good parts of the NetzDG (a national authorized recipient; transparency reports), closes its gaps (a put-back claim) and stands on a firmer human rights footing than the French Loi Avia, which overshot the mark.

What is good: there is no real-name requirement and platforms do not have to keep user registers. There is also no federal database of illegal content, as provided for in the German revision of the NetzDG. The reporting deadlines correspond to those in the NetzDG and are sensible; the transparency obligations are not particularly detailed, but are essentially sound. According to internet expert Ben Wagner of the Vienna University of Economics and Business Administration, the possibility of having recourse to payments from advertising customers when platforms default on appointing an authorized recipient is “charming”. Another good example is § 9 (3) KoPl-G, which explicitly rules out a general search obligation (“general monitoring”), which is prohibited under European law anyway.

Legal protection

The means of legal protection are important: if users are dissatisfied with the reporting procedure or the review process within the platform, they can turn to a complaints body established at RTR, which will then propose an amicable solution in a conciliation procedure. KommAustria acts as supervisory authority. Human rights expert Gregor Fischer of the University of Graz asks “whether the 240,000 euros earmarked for this purpose will be enough.”

Nevertheless, the conciliation procedures, if well designed, can take on a kind of mini oversight board function, with the complaints body developing content standards. To do this, however, it must urgently enter into consultations with as broad a circle of the Austrian internet community as possible. The Internet Governance Forum, which the University of Graz is organizing this fall, would be a good place to start. In parallel, the civil law avenues (judicial deletion via a dunning procedure using an online form at the district courts) and criminal law avenues of legal protection against online hate are being simplified (eliminating the cost risk for private prosecutions that end in acquittal, judicial investigation of suspects), so that the package does indeed amend Austrian “internet law” as a whole.

The fact that, as Amnesty Austria criticizes, review and complaint procedures are initially located at the platforms is hard to avoid without building up a massive parallel state justice system for online content. So private actors must – in the first instance – decide which content stays online. However, and this is important, they do not make the final decision on the legality of that content; they only decide whether they consider it legal or in conformity with their terms and conditions.

What can we do better?

There is still a fly in the platform regulation ointment: when Minister of Justice Alma Zadić says that it has been made clear “that the Internet is not a lawless space”, it sounds a bit like German Chancellor Merkel’s reference to the Internet as “Neuland”, “uncharted territory”. This is not a good narrative, even if platforms in particular appear, if not as lawless zones, then at least as zones with limited (and time-consuming) avenues of legal protection – something that has been apparent not only since the online hate campaigns against (above all female) politicians.

The automatism under which the supervisory authority is to step in after just five “well-founded” complaints procedures is rather too robust; it would probably be wiser to raise that number. The law gives KommAustria substantial powers anyway, even though it initially provides only for an order to the platform to improve. In Germany, the platforms have repeatedly tried to discuss the optimal design of complaints procedures with the responsible Federal Office of Justice, but the latter could only act in the form of “So nicht” (“not like this”) decisions. KommAustria, by contrast, can develop best practice models for the optimal design of moderation procedures, for example with regard to the Santa Clara Principles, after appropriate public consultation with all relevant stakeholders. It is also prudent that, in the course of the complaints procedure, RTR will not act in a sovereign capacity for the time being.

It also needs to be clarified exactly which platforms are covered. Why should the forums of online games fall under the reporting regime, but not the comment sections of newspapers? The latter seem to be of greater relevance for public discourse. Epicenter.works also points out that the exception for Wikipedia overlooks other projects such as Wikicommons and Wikidata.

More light  

We need even more transparency: in addition to general statements on efforts to combat illegal content, the reporting obligation should at least include a summary of the changes made to the moderation rules during the period under review and a list of the automated moderation tools used, as well as (with a view to the right to explanation under Art. 22 GDPR) their central selection and prioritization logic. This is exactly what is being debated in Brazil.

In Germany, Facebook regularly reports very low deletion figures under the NetzDG, because a lot of content is deleted under its community standards instead. Because Facebook’s NetzDG reporting form was rather hidden, the Federal Office of Justice, the competent regulator in Germany, has also already imposed a fine. It should therefore be made clear in the draft that the reporting and transparency obligations also extend to content that is not formally deleted “in accordance with the KoPl-G”, as this would otherwise leave the platforms a loophole.

The danger of overblocking is counterbalanced by a clear put-back claim. In Germany, no empirical proof that overblocking actually occurs has been produced so far. However, this is also due to the platforms’ sparing release of data – and the KoPl-G could make some improvements here. In the more detailed provisions on the reporting obligation, the supervisory authority should instruct the platforms to make not only comparable reports but also disaggregated data available to the public (and to science!) while respecting privacy.
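
To illustrate what such comparable, disaggregated reporting could look like in practice, here is a minimal sketch in Python; the record fields and the helper function are hypothetical, since neither the draft KoPl-G nor the supervisory authority prescribes a concrete schema.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModerationReportEntry:
    """One disaggregated row of a hypothetical platform transparency report."""
    period: str                    # reporting period, e.g. "2021-H1"
    legal_basis: str               # "KoPl-G notice" or "community standards"
    category: str                  # e.g. "incitement to hatred"
    reports_received: int          # user notifications in this category
    removals: int                  # items removed or blocked
    put_backs: int                 # items restored after counter-notification
    automated_tool: Optional[str]  # automated tool involved, if any
    median_decision_hours: float   # time from notification to decision

def removal_rate(entries: List[ModerationReportEntry]) -> float:
    """Share of reported items that were removed, aggregated over all entries."""
    reported = sum(e.reports_received for e in entries)
    removed = sum(e.removals for e in entries)
    return removed / reported if reported else 0.0
```

Published in such a machine-readable form, researchers could compare removal and put-back rates across platforms, categories and legal bases instead of relying on narrative summaries.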

Why it actually works

A basic problem of content moderation, by no means only on Facebook, cannot be solved even by the best law. The main responsibility actually lies with the platforms themselves: they set the rules, they design the automated tools, their employees delete and flag content. Currently, all major platforms follow the approach of leaving as many expressions of opinion online as possible, deleting only the worst postings (e.g. death threats) and attaching counter-statements (e.g. warnings) to problematic speech (e.g. disinformation, racism). Covid-19 has changed this only gradually. This approach rests on a maximization of freedom of expression at the expense of other important legal interests, such as the protection of the rights of others and social cohesion, which is no longer acceptable. The assumption implied by the platforms that all forms of speech are in principle a positive contribution to diversity of opinion is simply no longer true under today’s communication conditions on the Internet. Today, platforms oscillate between serious self-criticism (“We can do better”) and hypocritical self-congratulation; they establish internal quasi-judiciaries (Facebook), content rules advisory boards (TikTok) and transparency centers (Twitter), but then erratically delete only the worst content and – especially in the USA – get drawn into ideologically charged controversies.

For too long we, as a society, have accepted that platforms have no purpose beyond profit maximization. There is another way, as Facebook has just shown: a posting published on the company’s internal platform pointed to the “victimization of well-intentioned policemen” by society, ignoring the Black population’s systemic experience of violence. The posting led to heated debates. Under the community standards of “normal” Facebook, the statement would have been unobjectionable. Mark Zuckerberg, however, considered it a problem for the conversational and corporate culture within Facebook: he commented that “systemic racism is real” and pointed out to his co-workers that “controversial topics” could only be debated in “specific forums” within the company’s internal Facebook. These sub-forums are then also to be given “clear rules and robust moderation”. So it can be done after all.

However, as long as this awareness of the problems of platforms does not shape corporate policy and moderation practice across the board, laws such as the KoPl-G are necessary. The Commission now has three months to react. The Austrian authorities need to “stand still” until 2 December.

A marginal detail: the additional costs of the regulatory authority are to be covered by the state out of the broadcasting fee. That is certainly the simple route, but one could also think about tightening the fiscal screws on the platforms themselves.


– For more comments on the law, see the submissions by the author, Gregor Fischer and Felicitas Rachinger in the official review process at the Austrian Parliament.

* PD Mag. Dr. Matthias C. Kettemann, LL.M. (Harvard) is an internet legal expert at the Leibniz Institute for Media Research | Hans-Bredow-Institut (Hamburg), research group leader at the Sustainable Computing Lab of the Vienna University of Economics and Business Administration and lecturer at the University of Graz. In 2019 he advised the German Bundestag on the amendment of the Network Enforcement Act. In 2020 he published “The Normative Order of the Internet” with Oxford University Press.

Alexa as a psychotherapist

May 21, 2019 in Opinion

Members of the Privacy and Sustainable Computing Lab teach a variety of courses at the Vienna University of Economics and Business. In some cases, our students produce extraordinary work that stands on its own. Today’s blogpost presents what started as a class assignment by Zsófia Colombo, part of a Master seminar on Sustainable Information Systems taught by Prof. Spiekermann-Hoff and Esther Görnemann last winter. Based on a thorough introduction to the architecture and functions of smart speakers and voice assistants, students used scenario-building techniques to imagine a potential future of this technology, carefully selecting unpredictable influential factors and delicately balancing their interplay in the long run. The results of this particular assignment went far beyond what we expected.

Zsófia Colombo on Alexa’s qualities as a psychotherapist

Smart voice assistants like Alexa are a new trend and are used in many homes all over the world. Such a system lets the user access many different functions via voice control: it is possible to make calls, control the lights and temperature, put on music or order things online. Alexa is also able to learn about its users – their voice patterns, their preferences and behavior. In addition, it has many functions that can be customized according to the users’ needs and preferences. If Alexa is entrusted with the deepest thoughts of its users, it becomes important to ask whether the algorithm running the machine has the users’ best interests at heart. What consequences can such a scenario have? Zsófia asked just these questions and made a video trying to answer them. She created three different scenarios involving users who seek out Alexa’s help with their mental health issues, whereby Alexa provides a proper diagnosis and gives them good advice.

Alexa as a grief therapist  

Alexa supports a user through the five stages of grief without a friend or a therapist by her side. Alexa can learn about the stages of grief through the activation of a “Grief Therapy Function”. Alexa can also offer help if she notices irregularities, for example sadness in the voice commands, the disappearance of a user, or changes in shopping habits or on social media. Alexa might react by asking the user what she is grateful for today, or by putting on happy music or her favorite TV shows. She might also notify friends or loved ones if the user has not left the house in days; Alexa would have that information by checking the user’s location or the front door motion sensor. She would additionally set up an automatic shopping order to take care of food and basic needs. Alexa would guide the user through the various stages of grief by asking her questions and talking to her about her feelings. Even though Alexa would turn off the Grief Therapy Function in the end, the user might become so accustomed to her presence that she neglects her real friends and loses the ability to connect with them. She might also develop serious health issues due to the consumption of takeout food and lack of exercise. On top of that, the personal information the user provided influences the product placement in her favorite TV show without her knowledge or consent. As soon as she finds out, she experiences a negative moment of truth, which could result in her no longer using Alexa.

Alexa as a couple therapist 
One of the partners cheated and the couple is trying to heal their relationship with the help of the “Therapy Function”. That means couples therapy with Alexa twice a week. Alexa additionally subscribes them to a meditation app and plans a date night for them. What happens to the data they shared about their intimate relationship? There is no definite answer to the question whether therapist–patient privilege also applies to this kind of relationship. Alexa would use the data for restaurant recommendations, with the recommended restaurants paying a commission. Over time, the couple could lose the ability to make decisions on their own. They could also get into financial difficulties by letting Alexa book and prepay everything. This could lead to Alexa offering them a loan from Amazon, triggering a negative moment of truth, which could lead the couple to stop using Alexa altogether.

Alexa treats social media addiction      
The third example is the story of a student who uses Alexa to help with her social media addiction. Alexa could either notice on her own by using an app that measures how much the student uses Social Media or by means of a certain voice command like “Alexa, help with Social Media”. Alexa could subsequently help by asking the right questions and putting things into perspective. The student would experience a positive moment of truth and realize that she can stop her destructive behavior.

Overall, the relationship between the user and Alexa may grow more intimate over time, which does raise concerns. The question remains whether it is healthy to treat Alexa as a therapist, especially as companies willing to pay Amazon can profit from the personal data provided by users in a vulnerable position. These companies can use the data to manipulate users into consuming their products. This seems especially questionable for users with mental health issues, who may have difficulty protecting themselves.

You can watch the full video about the three scenarios here:

Sarah Spiekermann: Who looks after the Ethics of AI? On the Role of the Regulators and Standards

January 31, 2019 in Opinion

A lot of attention is paid these days to whether and how AI should be regulated to ensure its ethics. A High-Level Expert Group in Brussels has started the usual European “stakeholder carousel” and called for input on its first ideas.

But in fact, AI is a highly technical matter, and when it comes to technical standardization, ethics is a relatively new field. Technology standardization traditionally deals with protocols, hardware specifications, and the like. The fuzzy domain of ethics, as well as the context-sensitivity of any ethical matter, seems almost contrary to the straight and homogeneous logic of the engineering world.

A first question in this challenge is therefore what ethics is in the first place. While the philosophical world has worked on this question for over 2000 years, ethics means – in a nutshell – to do the right thing in the right way and to be a good person in doing so. In other words: to act – ideally as an exemplary role model – such that you or your company contributes to the good of society; to create positive externalities and avoid negative ones; to create wellbeing in this world and combat the opposite.

Not everyone is equally good at acting ethically. Most humans learn pretty much to stay in the green area (see figure 1). This is what we learn in childhood, as part of our upbringing or by growing up as members of society. In contrast to this good or ethical behaviour, there is also bad behaviour, or what the law recognizes as criminal behaviour. Fairy tales call it “evil”. Between these two extremes of good and bad, of good and evil, there is some kind of borderline behaviour – a behaviour Germans would call „grenzwertig“ or „marginal“. The law demarcates the line where this marginal behaviour is no longer acceptable; where a practice is so bad that it is no longer legitimate; an extreme point where the rights of people, nature or society are undermined by actors in such a way that they should be sanctioned to ensure the long-term stability of society.

From my perspective, any technology, including AI, can and should be built such that it fosters ethical behaviour in humans and human groups and does not cause harm. Technology can support ethical behaviour. And – most importantly – it can be built with an ethical spirit; the latter supports the former. What cannot be excluded prior to deployment is that borderline (or even criminal) behaviour is accidentally triggered by a new technology or by the people using it. This happened to Microsoft when their AI became a fascist. Technology should therefore be improved iteratively, so that borderline effects that emerge are subsequently corrected. In this vision there is no need for regulation: companies and engineers can do the job, constantly working towards the good in their artefacts.

But here is a challenge: ethical standards vary between world regions. Europe has different standards when it comes to safety, green IT/emission levels, privacy, etc. There are completely different ethical standards when it comes to freedom and liberty if we compare Europe with China. There are completely different gender models when Russia and the Middle East are compared to Europe or the US. Ethics is always concerned with what is good for the communities in a region. But technology is global these days and built to scale in international markets. So my guess is that technical de facto standards rolled out worldwide often unintentionally and easily cross the borderline as they spread across the world.

Regional legislators then have to look into this borderline behaviour of foreign technology to protect relevant values of their own society. This is what happened in the case of privacy, where the GDPR now protects Europe’s civil value standard. The GDPR shows that the legislator has a role to play when technologies cross borders.

And here is another challenge: unfortunately these days – let’s be realistic – not all companies are angels. “Firms of endearment” do exist, but quite a few companies also play the game of borderline ethical/legal behaviour (see figure again). Be it to save cost, to be first to market, to pursue questionable business models or to try new things whose effects are hardly known, companies can have incentives to pursue business practices that are ethically debatable. For example, a company may develop AI software for predictive policing or border control where it is not fully transparent how the software’s recommendations come about, what the data quality is, and so on. When companies are in these situations today, they often play “the borderline game”. They do this in two ways:

  1. They influence the borderline by setting de facto standards. They push rapidly into markets, setting standards that then persist in the market with quite a lot of ethical flaws. Examples are Uber and Facebook, which face a lot of criticism these days around diverse ethical issues after the fact (such as hate speech, privacy, contractual arrangements with employees, etc.).
  2. Or, secondly, companies actively work in official technical standardization bodies (such as CEN, ISO, IEEE, the WWW Forum, etc.) to ensure that technical standards are compatible with their business models and/or technical practices.

In both cases, companies prioritize the pursuit of their business over care for ethical externalities. How can regulators handle this mechanism?

To address problem 1 – sudden de facto standards – regulators need to set barriers to entry into their markets. For instance, they can demand that any external technology brought to market go through an ethical certification process. Europe should be thinking hard about what it lets in and what not.

To tackle problem 2 – companies influencing proper standardization processes – regulators must pay more attention to the games played at the official standardization bodies to ensure that unethical borderline technologies are not actually standardized.

So to sum up, there are these three tasks for regulators when it comes to tech-ethics:

  1. Regional legislators always have to look into ethical borderline behaviour of foreign technology to protect relevant values of their societies.
  2. Regulators need to set barriers to entry into their markets, i.e. by testing and ethically challenging what’s built and sold in one’s market. Europe should be thinking hard about what it lets in and what not.
  3. Regulators must also watch the games played at standardization bodies to ensure that unethical borderline technologies are not legitimized through standardization.

Are we in Europe prepared to take on these three tasks? I am not sure, because a first question remains in the dark when it comes to Europe’s trans-regional political construct: who is “the regulator”? Is it the folks in Brussels, who pass some 70% of legislation for the regions today? Or is it the national governments?

Let’s say that when it comes to technology, regulation should be proposed in Brussels, so that Europe as a region is a big enough internal market for regional technologies to flourish while engaging in healthy competition with the rest of the world. But even then we have to ask: who is “Brussels”? Who “in Brussels”? When we ask who the regulator is, we should not forget that behind the veil of “DG bureaucracy” it is really individual people we are talking about – people who play, are, or believe themselves to be “the regulator”. And so the very first question when it comes to ethical regulation, just as much as ethical standardization, is who the people involved in these practices actually are. Do they truly pursue regional interests – for instance, European interests? A good way to answer this question is to ask on whose payroll they are, or who sponsors their sabbaticals, their research institutes, etc.

For example: when there is a High-Level Expert Group (HLEG) on AI ethics, it is worth asking: who are the people administering the master copies of the group’s recommendation documents? Are these people paid by European taxpayers, or are they paid by US corporations? We need transparency on both. Because it is this HLEG that is likely to pass recommendations on both legislation and standardization.

Let’s presume, in this example, that they are all people paid by European taxpayers. Another set of questions, demanded by the ethical subject matter specifically, is: what concept of a person (idea of man, “Menschenbild”) do these people have? Do they believe in the grace and dignity of human beings, and do they respect humans as they are, with all their weaknesses? Do they have a loving attitude towards mankind, or do they think – as many do these days! – that current analogue humanity is the last, suboptimal generation of its kind? In short: who do we actually entrust with regulation in sensitive ethical areas, such as the ethics of AI? As we move into ever more sensitive ethical and social matters with technology, I think these questions need to be asked.

As we pass HLEG recommendations into standardization or even regulation, as we establish standards and make these standards part of the law, we need boards, such as the former Art. 29 Working Party for data protection, composed of (1) recognized domain experts and (2) well-respected individuals who can be entrusted with judging whether a standard (or even a law) actually lives up to ethical standards. I would like to call such a board “guardians of ethics”. Guardians of ethics should be respected as a serious entity of power: a group that has the power to turn down legislative proposals and standards; a group that inserts into the system a new kind of separation of powers, between lobby-infused regulators and business-driven standard makers on one side and the public interest on the other. Today we only find this separation of powers at the high courts. The high courts decide on the borderline between acceptable and unacceptable technology design. But high courts come too late in the process. Faced with the rapid diffusion of new technologies, ethical judgements should come before a technology is deployed – before de facto standards pre-empt the law and before a technical standard is released. The societal costs are too high to make ethical judgements on technology only after the fact and at late points in a technology’s existence. Any standardization process and any law on ethics in technology should be passed by guardians of ethics who can challenge proposals before bad developments unravel. Guardians of ethics would have the power to stop what is unwanted.

How the Use of ‘Ethical’ Principles Hijacks Fundamental Freedoms: The Austrian Social Media Guidelines on Journalists’ Behaviour

August 8, 2018 in Opinion

A guest opinion piece by Eliska Pirkova

The recent draft of the Social Media Guidelines targeting journalists working for the public Austrian Broadcasting Corporation (ORF) is a troubling example of how self-regulatory ethical codes of conduct may be abused by those who wish to establish stricter control over press and media freedom in the country. Introduced by ORF managing director Alexander Wrabetz as a result of strong political pressure, the new draft of the ethical guidelines seeks to ensure the objectivity and credibility of ORF activities on social media. Indeed, ethical guidelines are common practice in media regulatory frameworks across Europe. Their general purpose is already contained in their title: to guide. They mainly set out ethical principles to be followed by journalists when performing their profession. In other words, they serve as a voice of reason, underlining and protecting the professional integrity of journalism.

But the newly drafted ORF Guidelines threaten precisely what their proponents claim to protect: independence and objectivity. As stipulated in the original wording of the Guidelines from 2012, they should be viewed as recommendations, not as commands. Nonetheless, their latest draft, released in June 2018, uses a very different tone. The document creates a shadow of hierarchy by forcing every ORF journalist to think twice before they share anything on their social media. First, it specifically stipulates that “public statements and comments in social media should be avoided, which are to be interpreted as approval, rejection or evaluation of utterances, sympathy, antipathy, criticism and ‘polemics’ towards political institutions, their representatives or members.” Every single term used in this sentence, whether ‘antipathy’ or ‘polemics’, is extremely vague at its core. Such vagueness allows the inclusion of any critical personal opinion aimed at the current establishment, no matter how objective, balanced or well-intended the critique may be.

Second, the Guidelines ask journalists to refrain from “public statements and comments in social media that express a biased, one-sided or partisan attitude, support for such statements and initiatives of third parties and participation in such groups, as far as objectivity, impartiality and independence of the ORF is compromised. The corresponding statements of opinion can be made both by direct statements and indirectly by signs of support / rejection such as likes, dislikes, recommendations, retweets or shares.” Here again, terms such as ‘partisan’ are very problematic. Does criticizing human rights violations or supporting groups fighting climate change qualify as biased? Under this wording, a chilling effect on the right to freedom of expression is inevitable: journalists may choose to self-censor in order to avoid difficulties and further insecurity in their workplace. At the same time, securing the neutrality of the country’s main public broadcaster cannot be achieved by excluding the plurality of expressed opinions – especially when the neutrality principle is meant to protect exactly that plurality.

Media neutrality is necessary for impartial broadcasting committed to the common good. In other words, it ensures that media are not misused for propaganda and other forms of manipulation. Therefore, in order for media to remain neutral, diversity of opinion is absolutely essential; anything else is simply incompatible with the basic principles of journalistic work. The primary duty of the press is to monitor whether the rule of law is intact and fully respected by the elected government, and to inform the public. Because of its great importance in preserving democracy, the protection of the free press is enshrined in national constitutions as well as enforced by domestic media laws. Freedom of expression is not only about the right of citizens to write or say whatever they want; it is mainly about the public’s right to hear and read what it needs (Joseph Perera & Ors v. Attorney-General). In this vein, the current draft of the Guidelines undermines the core of journalism through its intentionally vague wording and by misusing, or rather twisting, the concept of media neutrality.

Although not a legally binding document, the Guidelines still pose a real threat to democracy. This is a typical example of ethics and soft-law self-regulatory measures becoming a gateway to more restrictive regulation of press freedom and media pluralism. Importantly, the non-binding nature of the Guidelines serves as an excuse for policy makers, who defend its provisions as merely ethical principles for journalists’ conduct and not legal obligations per se, enforced by a state agent. In practice, however, the independent and impartial work of journalists is increasingly jeopardised, as every statement, whether made in a personal or professional capacity, is subjected to much stricter self-censorship in order to avoid further obstacles to their work or even the imposition of ‘ethical’ liability for their conduct. If the current draft is adopted as it stands, it will provide an extra layer of strict control that aims to silence critique and dissent.

From a fundamental rights perspective, the European Court of Human Rights (ECtHR) has stated on numerous occasions the vital role of the press as a public watchdog (Goodwin v. the United Kingdom). Freedom of the press is instrumental for the public to discover and form opinions about the ideas and attitudes held by their political leaders. At the same time, it provides politicians with the opportunity to react and comment on public opinion. Healthy press freedom is therefore a ‘symptom’ of a functioning democracy. It enables everyone to participate in the free political debate, which is at the very core of the concept of a democratic society (Castells v. Spain). When democracy starts fading away, a weakening of press freedom is the first sign that has to be taken seriously. It is very difficult to justify why restricting journalists’ behaviour, or more precisely the political speech on their private Facebook or Twitter accounts, should be deemed necessary in a democratic society or should pursue any legitimate aim. Constitutional courts that follow and respect the rule of law could never find such a free speech restriction legitimate. It also raises the question of the future of Austrian media independence, especially when judged against the current government’s ambitious plan to transform the national media landscape.

When the radical right-wing populist Freedom Party (FPÖ) and the conservative ÖVP formed the ruling coalition in 2000, the Austrian government was shunned by European countries and threatened with EU sanctions. But today’s atmosphere in Europe is very different. Authoritarian and populist regimes openly undermining democratic governance are the new normal. Under such circumstances, the human rights of all of us are in danger due to widespread democratic backsliding, present in western countries as much as in the eastern corner of the EU. Without a doubt, journalists and media outlets have a huge responsibility to impartially inform the public on matters of public interest. Ethical codes of conduct thus play a crucial role in journalistic work, acknowledging a great responsibility to report accurately while avoiding prejudice or any potential harm to others. However, when journalists’ freedom of expression is violated, the right of all of us to receive and impart information is in danger, and so is democracy. Human rights and ethics are two different things. One cannot be misused to unjustifiably restrict the other.

2 days to GDPR: Standards and Regulations will always lag behind Technology – We still need them… A Blog by Axel Polleres

May 23, 2018 in Opinion

With the European General Data Protection Regulation (GDPR) coming into effect two days from now, there is a lot of uncertainty. In fact, many view the now stricter enforcement of data protection and privacy as a late repair of harm already done, in the context of recent scandals such as the Facebook/Cambridge Analytica breach, which caused a huge discussion about online privacy over the past month, culminating in Mark Zuckerberg’s testimony in front of the Senate.

“I am actually not sure we shouldn’t be regulated.” – Mark Zuckerberg in a recent BBC interview

Like for most of us, my first reaction to this statement was a feeling of ridiculousness: it is already far too late, and an incident such as the Cambridge Analytica scandal was foreseeable (as indicated, for instance, by Tim Berners-Lee’s reaction to his Turing Award back in 2017). So many of us may say or feel that the GDPR is coming too late.

However, regulations and standards have another effect beyond the sheer prevention of such things happening: cleaning up after the mess.

This is often the role of regulations and, in a similar way, of (technology) standards.

Technology standards vs legal regulations – not too different.

Given my own experience of contributing to the standardisation of a Web data query language, SPARQL 1.1, this was very much our task: cleaning up and aligning diverging implementations of needed additional features which had been implemented in different engines to address users’ needs. Work on standards often involves compromises (another parallel to legislation), so whenever I am confronted with this or that not being perfect in the standard we created, that’s normally the only response I have… we’ll have to fix it in the next version of the standard.

Back to the privacy protection regulation: this is also what will need to happen. We now have a standard, call it “GDPR 1.0”, but it will take a while until its implementors, the member states of the EU, have collected enough “implementation experience” to come up with suggestions for improvements.

Over time, hopefully enough such experience will emerge to collect best practices and effective interpretations of the parts of the GDPR that still remain highly vague: take, for instance, what it means that “any information and communication relating to the processing of those personal data be easily accessible and easy to understand” (GDPR, recital 39).

The EU will need to continue to work towards GDPR 1.1, i.e. to establish best practices and standards that clarify these uncertainties and offer workable, agreed solutions, ideally based on open standards.

Don’t throw out the baby with the bathwater

Yet, there is a risk: voices are already being raised that the GDPR will be impossible to enforce in its entirety, individual member states are already trying to implement “softened” interpretations of the GDPR (yes, it is indeed my home country…), and ridiculous business-model ideas such as GDPRShield are mushrooming, e.g. to exclude European customers entirely in order to avoid GDPR compliance.

There are two ways the European Union can deal with this risk:

  • Soften the GDPR or implement it faintheartedly – not a good idea, IMHO, as any loopholes or exceptions around GDPR sanctions will likely put us de facto back into a pre-GDPR state.
  • Stand firmly behind the GDPR and strive for full implementation of its principles, while starting to work on GDPR 1.1 in parallel, that is, adding best practices and technical standards which make the GDPR work and help companies to implement it.

In our current EU project SPECIAL, which I will have the opportunity to present again later this year at MyData2018 (in fact, talking about our ideas for standard formats to support GDPR-compliant, interoperable recording of consent and personal data processing), we aim at supporting the latter path. First steps towards connecting the two – the legal implementation of the GDPR and work on technical standards – into such a “GDPR 1.1”, supported by standard formats for interoperability and privacy compliance controls, were taken at a recent W3C workshop at my home university in Vienna, hosted by our institute a month ago.
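
To give a flavour of what an interoperable consent record could look like, here is a minimal sketch in Python. The field names and values are purely illustrative assumptions of mine and do not follow the actual SPECIAL vocabularies or any W3C draft; they merely show the kind of structured, machine-readable record that such standard formats would pin down.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ConsentRecord:
    """Illustrative consent record; all field names are hypothetical."""
    data_subject: str      # pseudonymous identifier of the user
    data_categories: list  # e.g. ["location", "contact details"]
    purpose: str           # processing purpose the consent covers
    controller: str        # company the consent was given to
    storage_location: str  # e.g. "EU"
    expires: str           # ISO 8601 timestamp after which consent lapses
    given_at: str          # ISO 8601 timestamp of the consent event

record = ConsentRecord(
    data_subject="urn:example:user:42",
    data_categories=["location"],
    purpose="personalised route recommendations",
    controller="ExampleMaps Ltd.",
    storage_location="EU",
    expires="2019-05-25T00:00:00+00:00",
    given_at=datetime.now(timezone.utc).isoformat(),
)

# Serialising such records to a common format (JSON here; SPECIAL itself
# builds on semantic web vocabularies) is what would make consent and
# processing auditable across systems.
print(json.dumps(asdict(record), indent=2))
```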

Another example: Net Neutrality

As a side note, earlier in this blog I mentioned the (potentially unintended) detrimental effects that giving up net neutrality could have on democracy and freedom of speech. In my opinion, net neutrality is the next topic we need to think about in terms of EU regulation as well; dogmatic rules won’t help. Pure net neutrality is no longer feasible – it is probably a thing of the past, from a time when data traffic was not a scarce resource. In fact, regulating the distribution of data traffic may be justifiable by commercial interests (thanks to Steffen Staab for the link) or even by non-commercial ones, for instance optimizing energy consumption. The tradeoffs need to be weighed wisely against each other and regulated, but again, throwing out the baby with the bathwater, as may now have happened with the net neutrality repeal in the US, should be avoided.

Javier D. Fernández – Green Big Data

April 18, 2018 in Opinion

I have an MSc and a PhD in Computer Science, and it is sad (but honest) to say that in all my academic and professional career the word “privacy” was hardly ever mentioned. We do learn about “security”, but as a mere non-functional requirement, as it is called. Don’t get me wrong, I do care about privacy and I envision a future where “ethical systems” are the rule and no longer the exception, but when people suggest, promote or ask for privacy-by-design systems, one should also understand that we engineers (at least my generation) are mostly not yet educated in privacy by design.

That’s why, caring about privacy, I enjoy reading the diverse theories and manifestos that provide general principles for coming up with ethical, responsible and sustainable designs for our systems, in particular where personal Big Data (and all its variants, e.g. Data Science) is involved. The Copenhagen Letter (promoting open, humanity-centered designs that serve society), the Responsible Data Science principles (fairness, accuracy, confidentiality, and transparency) and the Ethical Design Manifesto (focused on maximizing human rights and human experience and respecting human effort) are good examples, to name but a few.

While acknowledging that these are inspiring works, an engineer might find the aforementioned principles a bit too general to serve as an everyday reference guide for practitioners. In fact, one could argue that they are deliberately open to interpretation, in order to adapt them to each particular use case: they point to the goal(s) and some intermediate stepping stones (e.g. openness or decentralization), while the work of filling in all the gaps is by no means trivial.

Digging a bit to find more fine-grained principles, I thought of the concept of Green Big Data, referring to Big Data made and used in a “green”, healthy fashion, i.e. human-centered, ethical, sustainable and valuable for society. Interestingly, the closest reference for such a term was a highly cited article from 2003 on “green engineering” [1]. In this article, Anastas and Zimmerman set out 12 principles to serve as a “framework for scientists and engineers to engage in when designing new materials, products, processes, and systems that are benign to human health and the environment”.

Inspired by the 12 principles of green engineering, I started an exercise to map such principles to my idea of Green Big Data. This map is by no means complete, and still subject to interpretation and discussion. Ben Wagner and my colleagues at the Privacy & Sustainable Computing Lab provided valuable feedback and encouraged me to share these principles with the community in order to start a discussion openly and widely. As an example, Axel Polleres already pointed out that “green” is interpreted here as mostly covering the privacy-aware aspect of sustainable computing, but other concepts such as “transparency-aware” (make data easy to consume) or “environmentally-aware” (avoid wasting energy by letting people run the same stuff over and over again) could be further developed.

You can find the Green Big Data principles below – looking forward to your thoughts!

The 12 Principles of Green Engineering, each mapped to a corresponding principle of Green Big Data, together with the related topics:

Principle 1
  • Green Engineering: Designers need to strive to ensure that all material and energy inputs and outputs are as inherently non-hazardous as possible.
  • Green Big Data: Big Data inputs, outputs and algorithms should be designed to minimize exposing persons to risk.
  • Related topics: Security, privacy, data leaks, fairness, confidentiality, human-centric

Principle 2
  • Green Engineering: It is better to prevent waste than to treat or clean up waste after it is formed.
  • Green Big Data: Design proactive strategies to minimize, prevent, detect and contain personal data leaks and misuse.
  • Related topics: Security, privacy, accountability, transparency

Principle 3
  • Green Engineering: Separation and purification operations should be designed to minimize energy consumption and materials use.
  • Green Big Data: Design distributed and energy-efficient systems and algorithms that require as little personal data as possible, favoring anonymous and personal-independent processing.
  • Related topics: Distribution, anonymity, sustainability

Principle 4
  • Green Engineering: Products, processes, and systems should be designed to maximize mass, energy, space, and time efficiency.
  • Green Big Data: Use the full capabilities of existing resources and monitor that it serves the needs of individuals and the society in general.
  • Related topics: Sustainability, human-centric, societal challenges, accuracy

Principle 5
  • Green Engineering: Products, processes, and systems should be “output pulled” rather than “input pushed” through the use of energy and materials.
  • Green Big Data: Design systems and algorithms to be versatile, flexible and extensible, independently of the scale of the personal data input.
  • Related topics: Sustainability, scalability

Principle 6
  • Green Engineering: Embedded entropy and complexity must be viewed as an investment when making design choices on recycle, reuse, or beneficial disposition.
  • Green Big Data: Treat personal data as a first-class but hazardous citizen, with extreme precautions in third-party personal data reuse, sharing and disposal.
  • Related topics: Privacy, confidentiality, human-centric

Principle 7
  • Green Engineering: Targeted durability, not immortality, should be a design goal.
  • Green Big Data: Define the “intended lifespan” of the system, algorithms and involved data, and design them to be transparent by subjects, who control their data.
  • Related topics: Transparency, openness, right to amend and to be forgotten, human-centric

Principle 8
  • Green Engineering: Design for unnecessary capacity or capability (e.g., “one size fits all”) solutions should be considered a design flaw.
  • Green Big Data: Analyze the expected system/algorithm load and design it to meet the needs and minimize the excess.
  • Related topics: Sustainability, scalability, data leaks

Principle 9
  • Green Engineering: Material diversity in multicomponent products should be minimized to promote disassembly and value retention.
  • Green Big Data: Data and system integration must be carefully designed to avoid further personal data risks.
  • Related topics: Integration, confidentiality, cross-correlation of personal data

Principle 10
  • Green Engineering: Design of products, processes, and systems must include integration and interconnectivity with available energy and materials flows.
  • Green Big Data: Design open and interoperable systems to leverage the full potential of existing systems and data, while maximizing transparency for data subjects.
  • Related topics: Integration, openness, interoperability, transparency

Principle 11
  • Green Engineering: Products, processes, and systems should be designed for performance in a commercial “afterlife”.
  • Green Big Data: Design modularly for the potential system and data obsolescence, maximizing reuse.
  • Related topics: Sustainability, obsolescence

Principle 12
  • Green Engineering: Material and energy inputs should be renewable rather than depleting.
  • Green Big Data: Prefer data, systems and algorithms that are open, well-maintained and sustainable in the long term.
  • Related topics: Integration, openness, interoperability, sustainability

 

[1] Anastas, P. & Zimmerman, J. 2003. Design through the 12 principles of green engineering. Environmental Science and Technology 37(5):94A–101A
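
As a small illustration of how one of these mappings might translate into everyday code, here is a minimal Python sketch for Principle 3 (require as little personal data as possible, favoring anonymous processing). The function and the suppression threshold are my own hypothetical choices, not part of the principles themselves.

```python
from collections import Counter
from typing import Dict, Iterable, Tuple

def aggregate_visits(events: Iterable[Tuple[str, str]], k: int = 5) -> Dict[str, int]:
    """Keep only per-page counts: user identifiers are discarded immediately,
    and counts below the threshold k are suppressed as a crude safeguard
    against singling out individuals."""
    counts = Counter(page for _user, page in events)
    return {page: n for page, n in counts.items() if n >= k}

if __name__ == "__main__":
    sample = [("u1", "/home"), ("u2", "/home"), ("u3", "/home"),
              ("u4", "/home"), ("u5", "/home"), ("u6", "/pricing")]
    print(aggregate_visits(sample))  # {'/home': 5}
```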

Axel Polleres: What is “Sustainable Computing”?

March 20, 2018 in Opinion

Blog post written by Axel Polleres and originally posted on http://doingthingswithdata.wordpress.com/

A while ago, together with my colleagues Sarah Spiekermann-Hoff, Sabrina Kirrane, and Ben Wagner (who joined in a bit later), we founded a joint research lab to foster interdisciplinary discussions on how information systems can be built in a private, secure, ethical, value-driven, and eventually more human-centric manner.

We called this lab the Privacy & Sustainable Computing Lab, to provide a platform to jointly promote and discuss our research and views, and a think-tank – open to others as well – on how these goals can be achieved. Since then, we have had many at times heated but first and foremost very rewarding discussions, creating mutual understanding between researchers from engineering, AI, social science, or legal backgrounds on how to address the challenges around digitization.

Not surprisingly, the first (and maybe still unresolved) discussion was about how to name the lab. Back then, our research was very much focused on privacy, but we all felt that the topic of societal challenges in the digital age needed to be viewed more broadly. Consequently, one of the first suggestions floating around was “Privacy-aware and Sustainable Computing Lab”, emphasizing privacy-awareness as one of the main pillars but aiming for a broader definition of sustainable computing; we later shortened it to just “Privacy & Sustainable Computing Lab” (merely for reasons of length, if I remember correctly – my co-founders may correct me if I am wrong 😉).

Towards defining Sustainable Computing

When we tried to come up with a joint definition of the term “Sustainable Computing” back then, I wrote in an internal e-mail thread that

Sustainable Computing for me encompasses obviously: 

  1. human-friendly 
  2. ecologically-friendly
  3. societally friendly 

aspects of [the design and usage of] Computing and Information Systems. In fact, in my personal understanding these three aspects are – in some contexts – potentially conflicting, but discussing and resolving these conflicts is one of the reasons why we founded this lab in the first place.

Conflicts add Value(s)

Conflicts can arise, for instance, when individual well-being is weighed higher than ecological impacts (or vice versa), or likewise over how much a society as a whole needs to respect and protect the individual’s rights and needs, and in which cases (if ever at all) the common well-being should be put above those individual rights.

These are fundamental questions in which I would by no means consider myself an expert, but obviously, if you think them into the design of systems or into a technology research agenda (which would be more my home turf), they both add value and make us discuss values as such. Making value conflicts explicit, and resolving conflicts about the understanding and importance of these values, is a necessary part of Sustainable Computing. This is why Sarah suggested the addition of

4. value-based

computing, as part of the definition.

Sabrina added that, although sustainable computing is not mentioned explicitly in it, the notion of Sustainable Computing resonates well with what was postulated in the Copenhagen Letter.

Overall, we haven’t finished the discussion about a crisp definition of what Sustainable Computing is (which is maybe why you don’t yet find one on our website), but for me this is actually fine: it keeps the definition evolving and agile, keeps us ready for discussions about it, and keeps us learning from each other. We also discussed sustainable computing quite extensively in a mission workshop in December 2017, to try to better define what it is and how it influences our research.

What I mainly learned is that we as technology experts play a crucial role and carry responsibility in defining Sustainable Computing: by being able to explain the limitations of technology, by acting as advocates of the benefits of technologies in spite of risks and justified skepticism, and by helping to develop technologies that minimize these risks.

Some Examples

Some examples of what, for me, falls under Sustainable Computing:

  • Government Transparency through Open Data, and making such Open Data easily accessible to citizens – we try to get closer to this vision in our national research project CommuniData
  • Building technical infrastructures to support transparency in personal data processing for data subjects, but also to help companies fulfill the respective requirements of legal regulations such as the GDPR – we are working on such an infrastructure in our EU H2020 project SPECIAL (see the sketch after this list)
  • Building standard model processes for value-based, ethical system design, as the IEEE P7000 working group does (with the involvement of my colleague Sarah Spiekermann).
  • Thinking about how AI can support ethics (instead of fearmongering about the risks of AI) – we will shortly publish a special issue with some examples in a forthcoming volume of ACM Transactions on Internet Technology (TOIT)
  • Studying phenomena and social behaviours online with the purpose of detecting and pinpointing biases, as for example our colleagues at the Complexity Science Hub Vienna do in their work on Computational Social Sciences, understanding Systemic Risks and Socio-Economic Phenomena
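As a purely hypothetical sketch of what such transparency in personal data processing could look like at the lowest technical level (this is not the SPECIAL project’s actual data model, and all field names are assumptions), one could keep an append-only log of processing events that data subjects can later inspect:

```python
import json
from datetime import datetime, timezone


def log_processing_event(log_file: str, subject_id: str, purpose: str,
                         data_categories: list, legal_basis: str) -> dict:
    """Append one processing event to an append-only transparency log.

    Each entry records whose data was processed, which categories of data,
    for which purpose, and on which legal basis, so that data subjects
    can later inspect it.
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,
        "purpose": purpose,
        "data_categories": data_categories,
        "legal_basis": legal_basis,
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event


# Example: record that a newsletter service processed an e-mail address.
log_processing_event("transparency.log", subject_id="user-42",
                     purpose="newsletter delivery",
                     data_categories=["e-mail address"],
                     legal_basis="consent")
```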

Many more such examples are hopefully coming out of our lab through cross-fertilizing, interdisciplinary research and discussions in the years to come…

 

Let’s Switch! Some Simple Steps for Privacy-Activism on the Ground

March 13, 2018 in Opinion

by Sarah Spiekermann, Professor of Business Informatics & Author,

Vienna University of Economics and Business, Austria

Being an “activist” sounds like the next big hack in order to change society for the better; important work done by really smart and courageous people. But I wonder whether these high standards for activism suffice to really change things on the ground. I think we need more: We need activism on the ground.

What is activism on the ground?

By activism on the ground I mean that all of us need to be involved: anyone who consumes products and services, anyone who currently does not engage in any of those “rational choices” that economists ascribe to us. Let’s become rational! Me, you, we all can become activists on the ground and make markets move OUR way. How? By switching! Switching away from the products and services that we currently buy and use where we feel that the companies providing them don’t deserve our money or attention or – most importantly – any information about our private lives.

For the digital service world, I have been thinking about how to switch for quite some time. And in November last year I started a project with my Master Class in Privacy & Security at the Vienna University of Economics and Business: We went out and tested the market-leading Internet services that most of us use. We looked into their privacy policies and checked to what extent they give us fair control over our data or – in contrast – hide important information from us. We benchmarked the market leaders against their privacy-friendly competitors. We looked at their privacy defaults and the information and decision control they give us over our data, to check whether switching to a privacy-friendly alternative is a realistic option. We also compared all services’ user experience (nothing is worse than functional but unusable security…). And guess what? Ethical machines are indeed out there.
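Purely to illustrate how such a benchmark comparison can be organized (a hypothetical sketch; the services, criteria, and scores below are placeholders, not the actual results of the study), one could tabulate a few criteria per service and rank the alternatives:

```python
# Hypothetical privacy benchmark sketch; the services, criteria and scores
# below are illustrative placeholders, not the study's actual results.
CRITERIA = ["privacy_defaults", "decision_control", "transparency", "usability"]

services = {
    "MarketLeaderMessenger": {"privacy_defaults": 2, "decision_control": 2,
                              "transparency": 3, "usability": 5},
    "PrivacyFriendlyMessenger": {"privacy_defaults": 5, "decision_control": 4,
                                 "transparency": 4, "usability": 4},
}


def overall_score(scores: dict) -> float:
    """Average the per-criterion scores (1 = poor, 5 = excellent)."""
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)


# Print the services from most to least privacy-friendly overall.
for name, scores in sorted(services.items(),
                           key=lambda item: overall_score(item[1]),
                           reverse=True):
    print(f"{name}: {overall_score(scores):.2f}")
```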

So why not switch?

Here is the free benchmark study for download, which gives you the overview.

Switching your messenger services

For the messenger world, I can personally recommend Signal, which works just as well as WhatsApp does; only that it is blue instead of green. I actually think that WhatsApp does not deserve to be green, because the company shares our contact network information with anyone interested in buying it. My students found that Signal’s privacy design is not quite as good as Wickr Me’s. I must admit that I had some trouble using Signal on my new GSMK Cryptophone, where I obviously reject the idea of installing Google Play; but for normal phones Signal works just fine.

Switching your social network

When it comes to social networks, I quit Facebook long ago. I thought the content had become a bit boring over the past 4-5 years, as people have started to become more cautious about posting their really interesting stuff. I am on Twitter and find it really cool, but the company’s privacy settings and controls are not good. We did not test for Twitter addictiveness …

I signed up with diaspora*, which I have known for a long time because its architecture and early set-up were done by colleagues in the academic community. It builds on a peer-to-peer infrastructure and hence has the architecture of choice for a privacy-friendly social network. Not surprisingly, my students found it really good in terms of privacy. I am not fully done with testing it myself. I certainly hate the name “diaspora”, which is associated with displacement from your homeland. The name signals too much negativity for a service that is actually meant to be a safe haven. But other than that I think we should support it more. Interestingly enough, my students also benchmarked Ello, which is really a social network for artists by now. But as Joseph Beuys famously proclaimed, “Everyone is an artist”, right? I really support this idea! And since their privacy settings are OK (just minor default issues…), this is also an alternative for creative social nomads to start afresh.

Switching your maps service

HERE WeGo is my absolute favorite when it comes to a location service. And this bias has a LONG history, because I already knew the guys who built the service in its earliest versions back then in Berlin (at the time the company was called Gate5). Many of this service’s founding fathers were also members of the Chaos Computer Club. And guess what: when hackers build for themselves, they build really well.

For good reasons, my students argue that OSMAND is a great company as well. Especially its decisional data control seems awesome. No matter what you do: don’t waste your time throwing your location data into the capitalist hands of Google and Apple. Get rid of them! Maps.me and Waze are not any better according to our benchmark. Location services that don’t get privacy right are the worst things we can carry around with us, because letting anyone know where we are at any point in time is really stupid. If you don’t switch for the sake of privacy, switch for the sake of activism.

Switching E-Mail services

I remember when a few of my friends started to be beta users of Gmail. Everyone wanted to have an account. But ever since Google decided not only to scan all our e-mails for advertising purposes but also to combine this knowledge with everything else we do with them (including search, YouTube, etc.), I have turned away from the company. I do not even search with Google anymore, but use Startpage as a very good alternative.

That said, Gmail is really not the only online mail provider that scans all you write and exchange with others. As soon as you handle your e-mail in the cloud with free providers, you must kind of expect that this is the case. My students therefore recommend switching to Runbox. It is a pay-for e-mail service, but the price is really affordable, starting at €1.35 per month for the smallest package and below €5 for a really comfortable one. Also: Runbox is a hydro-powered e-mail service, so you also do something good for the environment by supporting them. An alternative to Runbox is Tutanota. Its usability was rated a bit weaker in comparison to Runbox, but it is available for free.

Switching Calendar Systems

Calendars are, next to our physical locations and contact data, an important service to care about when it comes to privacy. After all, the calendar tells whether you are at home or not at a certain time. Just imagine an online calendar were hacked and your home broken into while you are not there. These fears were pretty evident in the class discussions I had with my students who created the benchmark study, and we therefore compared calendar apps as well. All the big service providers are really not what you want to use. Simple came up as the service of choice to use on your phone, at least if you have an Android operating system. If you do not have the calendar on your phone, or do not use Android, Fruux is the alternative of choice for you.

In conclusion, there are alternatives available, and you can make meaningful choices about your privacy. The question now is: will you be willing to do so?