Giving the Law a “Like”? Matthias Kettemann’s Comments on Austria’s draft Communications Platforms Law

October 10, 2020 in Opinion

Portrait Kettemann (c) Universität Graz

Matthias C. Kettemann *

Hatred on the net has reached unbearable proportions. With the Platform Act, the legislator is doing many things right. But the “milestone” for more online protection still has a few rough edges.

Germany has one, France wanted one, Turkey and Brazil are working on one right now – and Austria has pushed forward with one: it is all about a law that makes platforms more responsible for online hatred and imposes transparency obligations on them. The draft of the Communication Platforms Act (KoPl-G) was notified to the European Commission this week.

First of all: with regard to victim protection, the legislative package against hate on the net is exactly what Minister for Women’s Affairs Susanne Raab (ÖVP) calls it: a milestone. The protection of victims from threats and degradation is of great importance. The package achieves this through better protection against hate speech, a tightened offence of incitement to hatred, faster defense against cyberbullying and the prohibition of “upskirting”. The Communication Platforms Act (KoPl-G), however, still has some rough edges.

What looks good

The present draft is legally well crafted. Even the name is better than that of the German Network Enforcement Act, whose short title is odd at the very least (which network is supposed to be enforced?). In the impact assessment, the government also shows a healthy measure of humility and makes clear that a European solution against hate on the net, with greater involvement of the platforms, would have been preferable, but that this takes time, so national measures have to be taken in the meantime.

The government has learned from the German NetzDG. The draft adopts the good parts of the NetzDG (national authorized recipient; transparency reports), closes its gaps (by adding a put-back claim) and stands on a firmer human rights footing than the French Loi Avia, which overshot the mark.

It is good that there is no obligation to use real names and that platforms do not have to keep user registers. There is also no federal database of illegal content, as provided for in the German revision of the NetzDG. The reporting deadlines correspond to those in the NetzDG and are sensible; the transparency obligations are not particularly detailed, but are essentially just as sound. According to Internet expert Ben Wagner of the Vienna University of Economics and Business Administration, the possibility of accessing payments from advertising customers when platforms fail to appoint an authorized recipient is “charming”. Another good example is § 9 (3) KoPl-G, which explicitly rules out a general search obligation (“general monitoring”), which is prohibited under European law anyway.

Legal protection

The means of legal protection matter: if users are dissatisfied with the reporting procedure or the review process within the platform, they can turn to a complaints body established at RTR, which will then propose an amicable solution in a conciliation procedure. KommAustria acts as the supervisory authority. Human rights expert Gregor Fischer of the University of Graz asks “whether the 240,000 euros earmarked for this purpose will be enough.”

Nevertheless, the conciliation procedure, if well designed, can take on a kind of mini oversight board function, with the complaints body developing content standards. To do this, however, it must urgently enter into consultations with as broad a circle of the Austrian internet community as possible. The Internet Governance Forum, which the University of Graz is organizing this fall, would be a good place to start. In parallel, the civil-law route (judicial deletion via an expedited order procedure, using an online form at the district courts) and the criminal-law routes of legal protection against online hate are being simplified (eliminating the cost risk for private prosecutions that end in acquittal, judicial identification of suspects), so that the package does indeed amend Austrian “Internet law” as a whole.

The fact that review and complaint procedures are, as Amnesty Austria criticizes, initially located within the platforms is hard to avoid without building up a massive parallel state justice system for online content. Private actors must therefore decide, in the first instance, which content stays online. However, and this is important, they do not make the final decision on the legality of that content; they only decide whether they consider it legal or in conformity with their terms and conditions.

What can we do better?

There is still a fly in the platform regulation ointment: when Minister of Justice Alma Zadić says that it has been made clear “that the Internet is not a lawless space”, it sounds a bit like German Chancellor Merkel’s reference to the Internet as “Neuland”, “uncharted territory”. This is not a good narrative, even if the platforms in particular have appeared, if not as lawless zones, then at least as zones with limited (and time-consuming) means of legal protection, as has been apparent not only since the online hate campaigns against (especially female) politicians.

The automatism by which the supervisory authority is to take action after just five “well-founded” complaints procedures is rather too robust. It would probably be wiser to raise that number. The law gives KommAustria substantial powers anyway, even if it initially only provides for an order to the platform to improve its procedures. In Germany, the platforms repeatedly tried to discuss the optimal design of complaints procedures with the responsible Federal Office of Justice, but the latter could only act through “not like this” (“So nicht”) decisions. KommAustria, by contrast, can develop best-practice models for the optimal design of moderation procedures, for example with regard to the Santa Clara Principles, after appropriate public consultation with all relevant stakeholders. It is also prudent that RTR will not act in a sovereign capacity during the complaints procedure for the time being.

It also needs to be clarified exactly which platforms are covered. Why should the forums of online games fall under the reporting obligations, but not the comment sections of newspapers? The latter seem to be of greater relevance for public discourse. Epicenter.works also points out that the exception for Wikipedia overlooks other projects such as Wikicommons and Wikidata.

More light

We need even more transparency: in addition to general statements on efforts to combat illegal content, the reporting obligation should at least include a summary of the changes made to the moderation rules during the reporting period and a list of the automated moderation tools used, as well as (with a view to the right to explanation in Art. 22 GDPR) their central selection and prioritization logic. This is exactly what is being debated in Brazil.

In Germany, Facebook regularly reports very low deletion figures under the NetzDG, because a great deal of content is deleted under its community standards instead. The Federal Office of Justice, the competent regulator in Germany, has already imposed a fine because of the rather hidden design of the NetzDG reporting form. The draft should therefore make clear that the reporting and transparency obligations also extend to content that is not formally deleted “in accordance with the KoPl-G”, as this would otherwise leave a loophole for the platforms.

The danger of overblocking is counterbalanced by a clear put-back claim. So far, no empirical proof that overblocking actually occurs has been produced in Germany. However, this is also due to the platforms’ sparing release of data, and the KoPl-G could make improvements here. In the more detailed provisions on the reporting obligation, the supervisory authority should instruct the platforms to make not only comparable reports but also disaggregated data available to the public (and to researchers!) while respecting privacy.

Why it can work after all

A basic problem of content moderation, by no means only on Facebook, cannot be solved by even the best law. The actual main responsibility lies with the platforms themselves: they set the rules, they design the automated tools, their employees delete and flag content. Currently, all major platforms follow the approach of leaving as many expressions of opinion online as possible, deleting only the worst postings (e.g. death threats) and adding counter-statements (e.g. warnings) to problematic speech (e.g. disinformation, racism). Covid-19 has only gradually changed this. This approach rests on a maximization of freedom of expression at the expense of other important legal interests, such as the protection of the rights of others and social cohesion, and that is no longer acceptable. The assumption implied by the platforms, that all forms of speech are in principle a positive contribution to diversity of opinion, is simply no longer true under the communication conditions of today’s Internet. Today, platforms oscillate between serious self-criticism (“We can do better”) and hypocritical self-congratulation; they establish internal quasi-judges (Facebook), content rules advisory boards (TikTok) and transparency centers (Twitter), but then erratically delete only the worst content and, especially in the USA, get caught up in ideologically charged controversies.

For too long, we as a society have accepted that platforms have no purpose beyond profit maximization. There is another way, as Facebook has just shown: a posting published on the company’s internal platform pointed to the “victimization of well-intentioned policemen” by society, ignoring the Black population’s systemic experience of violence. The posting led to emotionally charged debates. Under the community standards of “normal” Facebook, the statement would have been unobjectionable. Mark Zuckerberg, however, found it a problem for the conversational and corporate culture within Facebook: he commented that “systemic racism is real” and told his co-workers that “controversial topics” could in future only be debated in “specific forums” within the company’s internal Facebook. These sub-forums are then also to receive “clear rules and robust moderation”. So it works after all.

However, as long as this awareness of the platforms’ problems does not shape corporate policy and moderation practice across the board, laws such as the KoPl-G are necessary. The Commission now has three months to react. The Austrian authorities must observe a “standstill” period until 2 December.

A marginal detail: the additional costs of the regulatory authority are to be covered by the state from the broadcasting fee. That is certainly simple, but one could also consider tightening the fiscal screws on the platforms.


– For more comments on the law, see the submissions by the author, Gregor Fischer and Felicitas Rachinger to the official review process at the Austrian Parliament.

* PD Mag. Dr. Matthias C. Kettemann, LL.M. (Harvard) is an internet legal expert at the Leibniz Institute for Media Research | Hans-Bredow-Institut (Hamburg), research group leader at the Sustainable Computing Lab of the Vienna University of Economics and Business Administration and lecturer at the University of Graz. In 2019 he advised the German Bundestag on the amendment of the Network Enforcement Act. In 2020 he published “The Normative Order of the Internet” with Oxford University Press.