A Call for Interdisciplinary Collaboration toward the Realization of Needs-aware AI

June 28, 2022 in Opinion
Needs-aware AI


Soheil Human
soheil.human@wu.ac.at


Need is one of the most fundamental constructs connected to the different dimensions of Human-awareness, Accountability, Lawfulness, and Ethicality (HALE) of sociotechnical systems. This construct, however, has not been well considered in the design, development, evaluation, and maintenance of AI-based sociotechnical systems. In our new article [1], we call for the realization of “Needs-aware AI” through interdisciplinary collaboration.

Footnotes: 

[1] The article can be currently accessed here: https://rdcu.be/cQvQu; the permanent link is: https://doi.org/10.1007/s43681-022-00181-5.

Bibliography:

  • Watkins, R., Human, S.: Needs-aware artificial intelligence: AI that ‘serves [human] needs’. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00181-5
  • Lee, K.-F., O’Reilly, T.: Meet the Expert: How AI Will Change Our World by 2041. O’Reilly Media, Inc. (2021)
  • Shneiderman, B.: Design lessons from AI’s two grand goals: human emulation and useful applications. IEEE Trans. Technol. Soc. 1(2), 73–82 (2020)
  • Shneiderman, B.: Human-Centered AI. Oxford University Press, Oxford (2022)
  • Human, S., Fahrenbach, F., Kragulj, F., Savenkov, V.: Ontology for representing human needs. In: Różewski, P., Lange, C. (eds.) Knowledge Engineering and Semantic Web, Communications in Computer and Information Science, pp. 195–210. Springer International Publishing, Cham (2017)
  • OECD Report: Alternatives to traditional regulation. https://www.oecd.org/gov/regulatory-policy/42245468.pdf. Accessed 18 May 2022
  • Human, S., Gsenger, R., Neumann, G.: End-user empowerment: an interdisciplinary perspective. In: Proceedings of the 53rd Hawaii International Conference on System Sciences, Hawaii, United States, pp. 4102–4111 (2020)
  • Human, S., Watkins, R.: Needs and artificial intelligence. arXiv (arXiv:2202.04977 [cs.AI]) (2022). https://doi.org/10.48550/arXiv.2202.04977
  • Watkins, R., Meiers, M.W., Visser, Y.: A Guide to Assessing Needs: Essential Tools for Collecting Information, Making Decisions, and Achieving Development Results. World Bank Publications (2012)
  • McLeod, S.K.: Knowledge of need. Int. J. Philos. Stud. 19(2), 211–230 (2011)

 

Introducing Advanced Data Protection Control (ADPC)

June 14, 2021 in Announcements
ADPC


ADPC can fundamentally change our practice of online "consenting".

We are excited to introduce you to the Advanced Data Protection Control (ADPC).
ADPC is a proposed automated mechanism for communicating users’ privacy decisions. It aims to empower users to express and enforce their online choices in a human-centric, easy, and enforceable manner. ADPC also helps online publishers and service providers comply with data protection and consumer protection regulations.
Do you hate “cookie banners” too? ADPC would allow users to set their privacy preferences in their browser, plugin, or operating system and communicate them in a simple way – limiting friction in user interaction for providers and users alike, as foreseen or planned in various innovative laws.
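Under the hood, ADPC communicates such decisions machine to machine, e.g. via an HTTP header carrying consent and withdrawal directives. As a rough illustration only – the directive names and separators below are assumptions for the sketch, not the exact wire format from the ADPC specification – a server-side parser might look like:

```python
# Hypothetical sketch: parsing an ADPC-style header on the server side.
# The directive names ("consent", "withdraw") echo the draft ADPC spec,
# but the exact syntax may differ; treat this as an illustration, not a
# reference implementation.

def parse_adpc_header(header_value: str) -> dict:
    """Parse a header like 'consent=analytics, withdraw=*' into a
    dict mapping each directive to a list of purpose identifiers."""
    decisions = {}
    for directive in header_value.split(","):
        directive = directive.strip()
        if not directive or "=" not in directive:
            continue  # skip empty or malformed directives
        name, _, value = directive.partition("=")
        decisions[name.strip()] = value.strip().split()
    return decisions

# Example: the user consents to analytics but withdraws everything else.
decisions = parse_adpc_header("consent=analytics, withdraw=*")
```

A provider could then consult `decisions` before any processing step, instead of relying on a banner click recorded once and never revisited.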
ADPC was developed as a part of our RESPECTeD project, a joint project with NOYB, that was led by Soheil Human and Max Schrems.
You can find more information and follow ADPC updates on: https://twitter.com/ADPC_Spec
Thank you for supporting the development of ADPC over the last years. It would not have been possible without many of you!
LET’S CONSTRUCT A HUMAN-CENTRIC AND SUSTAINABLE DIGITAL WORLD TOGETHER!

Enhancing Information and Consent in the Internet of Things

June 9, 2021 in Opinion
Victor Morel


Victor Morel has recently joined the Sustainable Computing Lab. In this blog post, he introduces the project he recently completed: his PhD thesis.

Motivation

The introduction in 2018 of the General Data Protection Regulation (GDPR) imposed obligations on data controllers regarding the content of information about personal data collection and processing, and the means of communicating this information to data subjects. This information is all the more important because it is required for consent, which is one of the legal grounds for processing personal data. However, the Internet of Things can pose difficulties for implementing lawful information communication and consent management. The tension between the GDPR’s requirements for information and consent and the Internet of Things cannot be easily resolved; it is, however, possible. The goal of his thesis is to provide a solution for information communication and consent management in the Internet of Things from a technological point of view.

A generic framework for information communication and consent management

To do so, he introduced a generic framework for information communication and consent management in the Internet of Things. This framework is composed of a protocol to communicate and negotiate privacy policies, requirements to present information and interact with data subjects, and requirements over the provability of consent.

Technical options

The feasibility of this generic framework is supported by different implementation options. The communication of information and consent through privacy policies can be implemented in two different manners: directly and indirectly. Different ways to implement the presentation of information and the provability of consent are then presented. A design space is also provided for systems designers, as a guide for choosing between the direct and the indirect implementations.

Prototype implementations

Finally, fully functioning prototypes devised to demonstrate the feasibility of the framework’s implementations are presented. The indirect implementation of the framework is illustrated by a collaborative website named Map of Things. The direct implementation, combined with the agent presenting information to data subjects, is sketched as a mobile application, CoIoT.

Call for Participation in W3C Consent Community Group

March 5, 2021 in Announcements

Soheil Human

The concept of consent plays an essential role in the use of digital technologies as an enabler of the individual’s ownership, control, and agency. Regulations such as the GDPR assert this relationship by permitting the use of consent as one of the possible legal bases for lawful data processing. As a result, obtaining consent is widely practised in the digital world and can be perceived as an essential means of enabling the individual’s agency regarding the management and ownership of their personal data. While different legal frameworks specify various requirements and obligations regarding the legal validity of consent (which should be, e.g., freely given, specific, informed, and active), existing and ongoing research shows that the majority of people are not empowered to practise their digital right to privacy and lawful “consenting”, due to various malpractices and a lack of technological means acting in the individuals’ interest.

The W3C Consent CG (https://www.w3.org/community/consent/) aims to contribute towards the empowerment of humans concerning their rights of privacy and agency, by advocating interdisciplinary, pluralist, human-centric approaches to digital consent that are technologically and legally enforceable.

The mission of this group is to improve the experience of digital “consenting” while ensuring it remains adherent to relevant standards and laws. For this, the group will: (i) provide a space for people and stakeholders to come together; (ii) highlight and analyse concepts, issues, and problems around digital consenting; and (iii) propose and develop solutions. Some concrete areas for the work of this group are: (a) developing interdisciplinary solutions; (b) documenting and achieving legal compliance; (c) improving the user experience; and (d) utilising existing, and developing new, concepts and standards for digital consent.

In order to join the group, you will need a W3C account. Please note, however, that W3C Membership is not required to join a Community Group.

Global Privacy Control (GPC) + GDPR: will it work?

February 26, 2021 in Opinion


Global Privacy Control (GPC) is a signal to opt out of data sharing. Will it work with the GDPR?

Global Privacy Control (GPC) is a boolean or binary signal sent by browsers to websites to indicate the user’s request not to share (or sell) their personal data with third parties. The authors (and supporters) of this specification include people from the New York Times, Wesleyan University, DuckDuckGo, and Brave (along with many other researchers and supporters). This makes it not a toy project, given that a big publisher, a search engine, and a web browser vendor are actively supporting its implementation and adoption.

Today, GPC tweeted uptake numbers in the “hundreds of thousands”, with adoption by major publishers in the USA, and by WordPress. GPC is legally enforceable under the CCPA, where it acts as the ‘opt-out’ for ‘selling’ personal data, as confirmed in a tweet by AG Becerra (California). My interest in writing this is to explore how GPC relates to the other major data protection and privacy law across the Atlantic – the General Data Protection Regulation.

What is the GPC?

In essence, GPC is DNT reborn. It is a singular signal that, when set (or true), indicates that the user has requested the controller (the website the signal is sent to) not to share or sell their data with third parties. In other words, it is a request to stop, or opt out of, the sharing/selling of personal data to third parties. Given its binary or boolean nature, the GPC is simple to send, read, and evaluate: it is either set (or true) or it is not. The specification goes into more detail regarding the HTTP requests, headers, and structure for using the signal and its interactions. It also deals with how websites can indicate their support (or lack thereof) for abiding by the signal.
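Concretely, the spec defines the `Sec-GPC` request header (whose only valid value is `1`) and a well-known resource, `/.well-known/gpc.json`, through which a site can announce whether it abides by the signal. A minimal sketch in Python – how headers reach your handler is framework-specific and assumed here to be a plain dict; the header name, its value, and the resource format come from the spec:

```python
import json

# Minimal sketch of the two server-side touchpoints the GPC spec defines:
# reading the Sec-GPC request header, and advertising support via the
# /.well-known/gpc.json resource.

def gpc_requested(headers: dict) -> bool:
    # The spec defines exactly one valid value for the header: "1".
    return headers.get("Sec-GPC") == "1"

def well_known_gpc(last_update: str) -> str:
    # JSON body to serve at /.well-known/gpc.json, announcing that
    # the site abides by GPC signals.
    return json.dumps({"gpc": True, "lastUpdate": last_update})
```

For example, `gpc_requested({"Sec-GPC": "1"})` is the entire evaluation logic – which is precisely the simplicity the spec is going for.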

GPC data-flow

The GPC works somewhat in the following manner:

  1. I go to a website using a web browser where GPC is set to on
  2. I consent to a notice
  3. The web browser sends the GPC signal to the website (this may already have occurred before step 2) to indicate the request to opt out
  4. Website abides by the request and stops sharing data with third parties
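Step 4 is where a controller would actually act on the signal. One possible precedence rule (and, as the sections below show, a debatable one) is to let an explicit `Sec-GPC: 1` override any previously recorded opt-in. A sketch, with the stored opt-in flag as hypothetical application state rather than anything the spec defines:

```python
# Sketch of step 4: a controller deciding whether data may flow to third
# parties. "user_opted_in" stands in for hypothetical application state
# (e.g. a stored consent record); only the Sec-GPC header is from the spec.

def may_share_with_third_parties(headers: dict, user_opted_in: bool) -> bool:
    # Treat an explicit Sec-GPC: 1 as a request to opt out of
    # sharing/selling, regardless of any earlier opt-in. This is one
    # possible precedence rule, not a settled legal interpretation.
    if headers.get("Sec-GPC") == "1":
        return False
    return user_opted_in
```

Note that the opposite rule – letting a recorded Agree click win over the signal – is equally implementable, which is exactly the ambiguity discussed below.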

Legality

The GPC spec mentions that websites are responsible for conveying how the signal is going to be used or interpreted, based on their operating and applicable jurisdictions and binding regulations. Under CCPA, the GPC has teeth to be legally enforceable, and thus we have a large (and expanding) adoption across platforms. The spec also specifically mentions GDPR, and quotes the potential legal clauses it can use. I’m copying it verbatim here:

The GDPR requires that “Natural persons should have control of their own personal data” ([GDPR], Recital 7). The GPC signal is intended to convey a general request that data controllers limit the sale or sharing of the user’s personal data to other data controllers ([GDPR] Articles 7 & 21). This request is expressed with every interaction that the user agent has with the server.

Note that this request is not meant to withdraw a user’s consent to local storage as per the ePrivacy Directive (“cookie consent”) ([EPRIVACY-DIRECTIVE]) nor is it intended to object to direct marketing under legitimate interest ([GDPR]).

In addition, Robin Berjon (New York Times), one of the authors of the spec, elaborated on how it works in a debate in a Twitter thread. Paul-Olivier Dehaye (founder of PersonalData.io, and of “The Great Hack” documentary fame) then quipped about the possibility of using GDPR’s Code of Conduct mechanism to make GPC enforceable.

Has any EU data protection expert reviewed this? Companies have no obligation to honor a particular method chosen by the data subject to exercise their rights (unfortunately).

This being said, Art 40.2.f (Code of Conduct) does offer a chance to move in the right direction.

Others also offered various takes on GPC’s relation to the GDPR and DNT. See tweets by Nataliia Bielova regarding broader applicability to the GDPR’s framework of legal bases; Ralf Bendrath discussed the applicability of Article 21 of the GDPR regarding the right to object; and Irene Kamara and Lucas shared articles (this and this) about DNT being useful in today’s world.

What does GDPR say about consent?

GDPR has consent as a lawful basis for processing in Article 6(1)(a) for personal data and Article 9(2)(a) for special categories of personal data, among others (such as data transfers), but let’s focus on these broadly as ‘consent’. About withdrawal, Article 7(3) states the following:

The data subject shall have the right to withdraw his or her consent at any time. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal. Prior to giving consent, the data subject shall be informed thereof. It shall be as easy to withdraw as to give consent.

Notably, the GDPR does not have ‘opt-outs’. It explicitly requires an ‘opt-in’ via consent (where it is the legal basis), and the request to stop sharing data with a third party is equivalent to withdrawing the consent for it. Under GDPR, consent for purposes and processing actions that are separate must also be given separately. That is, consent for sharing data with a controller is one instance of consent, and sharing that data further with a third party should be a separate instance of consent. Recital 43 of the GDPR says:

Consent is presumed not to be freely given if it does not allow separate consent to be given to different personal data processing operations despite it being appropriate in the individual case

For completeness, Article 21 of the GDPR sets out the right to object. Specifically, Recital 69 says:

Where personal data might lawfully be processed because processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, or on grounds of the legitimate interests of a controller or a third party, a data subject should, nevertheless, be entitled to object to the processing of any personal data relating to his or her particular situation.

Thus, if consent is the legal basis, then withdrawing should limit the sharing of data with third parties. And if legitimate interest is the legal basis, then exercising the right to object should limit it. This is (probably) what GPC mentions in its specification about applicability for GDPR.

Why I’m feeling unsure

GPC is an exciting development for me. It is the first time (for me) that people have got together, created something, managed to roll it out, and even have a law that legalises its enforcement. I’ve thought about this many times, and there are several large questions that loom whenever GPC comes up. By GPC’s own specification, and admission, its applicability and enforceability under GDPR is ambiguous at best, and non-existent at worst. Where the CCPA has provisions that can be applied directly to requests about sharing data with third parties, the GDPR does not specify any such broad restrictions, and instead relies on its framework of legal bases and rights.

This distance between legalese and real world has been a point of pain, contention, and frustration as we see no actions against large scale and systemic consent mechanisms that misuse legal basis, purposes, and are clearly falling afoul of GDPR compliance. So even a regulator weighing in on the applicability of GPC is no guarantee of its applicability because (a) there are ~50 DPAs in EU so there needs to be uniformity in interpretation, something the EDPB would be likely to be involved with, and (b) unless case law explicitly outlines that GPC is enforceable, there is always scope for someone raising objections to using it.

Even without these, the process of applying GPC is unconvincing to me, no matter how well-intentioned it is. I feel that it has some weird loopholes that it does not clarify, and as a result, there are too many uncertainties – which in the GDPR and adtech world translate into loopholes, exploits, and malpractices.

#1 Setting GPC off could mean share with everyone

Let us pretend that I use a GPC-enabled browser, and I visit a website that requests my consent under GDPR. My browser has probably signalled the website, or the website or its consent CMP has checked, whether I use GPC. Under GDPR, consent choices should default to “no”, “off”, or “prohibit”. Therefore, the interpretation of the GPC should have no effect on the default choices. However, if the GPC is set to an explicit off, one could argue for setting the consent defaults to permit third-party data sharing, since the individual clearly wishes it (through GPC = off).

#2 GPC vs Agree button – who wins?

Let’s say I agree to sharing my data with a third party, knowingly and intentionally, by using the choices in the consent dialogue. Now I have indicated my wishes, but the GPC signal indicates otherwise. What should a website or controller do in such a situation, where the user’s consent is in conflict with an automatic signal? I would presume that a rational decision would be to respect the user’s own choice over the user’s automatic agent’s choice. And this is a subtle avenue for manipulation: as long as individuals continue to click on the Agree and Accept All buttons, the GPC could be argued to have been overridden by the user’s choices. For proponents of imbalanced consent requests: note that I’m speaking about hypothetical scenarios where the choices and interactions are actually valid.

Where GPC does benefit is when the consent dialogue is malicious and abusive. In such cases, we want the GPC to enforce a right to withdraw or object despite us having clicked on Agree to All. This also forms the elevator pitch for adopting GPC: “don’t worry, click on the agree buttons, we’ll send a withdraw request right along with it”. So which method should we go with? Should GPC override the consent choices or vice-versa? I imagine this is a chicken and egg problem (though the egg definitely came first because evolution).

A more generous interpretation and argument is that CMP vendors or providers would somehow integrate the GPC into the choices. This is a fallacy as long as the Accept All button exists, because along with it, the dilemma above also exists. In wonderland, the CMP would actually respect the GPC signal and turn off the sharing choices no matter which agree button you choose, or make you set them explicitly to affirm your choices.

#3 Tiny windows of opportunities and leaky pipelines

The crux of the issues with consent online stems from the mess that is the adtech ecosystem, consisting of data sharing with thousands of websites, real-time bidding, and impossible demands of informed choices, all built on the backbone that is the IAB TCF ‘framework’. In this, the moment you hit Agree, a signal is sent out to all controllers along with all of the data you consented to. Let us imagine this is what really happens for a moment. You click Agree and your personal data is sent to all of the thousands of third parties mentioned in the dialogue. Now, my browser also sends a GPC signal. Who receives it?

If the GPC is used by the CMP to block data being sent to the third parties, then we’re back at the problem in #2. If all the third parties receive the GPC signal, what are they supposed to do, and will they do it? What if the third parties claim that they will respect the GPC signal, but it will take time to process and implement? That leaves a tiny window of opportunity, where that third party has the personal data and my consent to process it for their desired purpose. In this case, GPC probably only restricts continued processing.

To think further along these lines, how will I know whether a third party has actually respected my GPC signal or my consent or both or neither? There is no requirement to confirm withdrawal of consent, and since GPC is automatic, one can presume there could be an automatic signal sent back in acknowledgement. But who is keeping track, where, and how? If the IAB decides to include the GPC signal in a future update to the TCF, will it make it mandatory to check the GPC for all consent interactions (nothing else will work)? Even if the answer is yes, we are still going to be sharing data with a third party. Thus, we have leaky pipelines of data that look like they might be respecting the GPC but could actually be malicious actors or claim innocence under the guise of technical naughtiness.

#4 Which of my consents does GPC represent?

GPC is singular, i.e. there is only one GPC signal (AFAIK) sent by the browser. There is no way to associate the GPC with a particular consent. So will the GPC blanket-withdraw or object to everything, everywhere? What if I have given consent to A as a third party, but don’t want to give it to B? In this case, will GPC request revocation for both? I know that GPC can be indicated per website, and can be checked per website when giving consent (I think, per the specification and the assumption that the CMP takes it into account). But then there is uncertainty as to whether my consent still applies or has been withdrawn by the GPC. Further, if controllers silently accept (or worse, ignore) the GPC, how do I keep track of what impact that automatic signal is having, and on which of my consents?

Lots of promise, Lots of worries

My nightmare is the GPC achieving global and wide adoption, and then being abused through loopholes all around. It is likely to happen because, come on, look at any random website to see what we live with. So why don’t we take the time to think this through, find these weird cases, discuss them, and close them as and how we can? This blog post is a think-aloud draft I’ve written for the sake of thinking about GPC. I intend to study it more, think about it in terms of GDPR, and then perhaps update this article as I come across new information and consequences.

CfP: Human-centricity in a Sustainable Digital Economy

February 23, 2021 in Announcements

CALL FOR PAPERS

The 55th Hawaii International Conference on System Sciences Mini-track on

Human-centricity in a Sustainable Digital Economy

http://hicss.hawaii.edu | HICSS-55 | January 4 – 7, 2022 | Hyatt Regency Maui, Hawaii, USA
Paper Submission Deadline: June 15, 2021 | 11:59 pm HST

The internet and the global digital transformation have changed many aspects of our lives. Not only economies and societies but also people’s personal lives have been influenced by this new and ever-emerging era of our history. While the digital age has made it possible to provide novel services and solutions for end-users, it has also caused serious concerns at different individual and societal levels, such as issues regarding online privacy, algorithmic bias, fairness and accountability of information systems, transparency, governance and explainability of information systems, end-user manipulation, fake news, traceability, etc. The development of human-centric and end-user-empowering information systems can be one approach towards “digital sustainability”, since they enable individuals to influence how their data is used, by whom, and for which purpose. Many novel and personalized services are emerging in this direction, which make the digital economy sustainable, i.e. a positive place that focuses on human users.

This minitrack aims to attract research that advances the understanding of human-centricity and end-user empowerment in a sustainable digital economy. As the transformation is multidimensional in nature, the minitrack adopts an interdisciplinary perspective, which considers human-centricity and end-user empowerment across application domains (e.g. software development, digital commerce, healthcare, administration, mobile apps, social media, and online services) and disciplines (e.g. economics, computer science, sociology). Among the relevant topics are:

  • Characteristics and design of sustainable human-centric information systems
  • Evaluation of existing information systems from a human-centric perspective
  • Co-creation and co-production of human-centric sustainable information systems
  • Analysis and design of technologies (e.g. AI, Blockchain) that empower end-users
  • Design of human-centric end-user agents, AI and machine learning
  • Fairness, transparency, accountability and controllability of information systems
  • Legal or economic aspects of human-centricity in information systems
  • Identity, privacy and consent management systems
  • Business value of human-centric and/or user empowered solutions
  • Sociotechnical studies of human-centricity in information systems
  • Opportunities and challenges of digital behavior change, habit formation, and digital addiction
  • Digital nudging for increasing social or ecological responsibilities
  • Ethical concerns regarding human-centricity and/or sustainability
  • COVID-19’s impact on human-centricity or sustainability of information systems

Publication of Papers

HICSS is the #1 Information Systems conference in terms of citations as recorded by Google Scholar. Presented papers will be included in the Proceedings of HICSS-55. Selected papers will be invited for a fast-track in Electronic Markets – The International Journal on Networked Business.

A Special Issue on “Human-centricity in a Sustainable Digital Economy” at Electronic Markets is planned.

Important Dates

June 15, 2021 | 11:59 pm HST: Paper Submission Deadline

August 17, 2021: Notification of Acceptance/Rejection

September 22, 2021: Deadline for Authors to Submit Final Manuscript for Publication

October 1, 2021: Deadline for at least one author of each paper to register for HICSS-55

January 4 – 7, 2022: Paper Presentations

Organizers

Soheil Human, Vienna University of Economics and Business (WU Wien), Austria

Gustaf Neumann, Vienna University of Economics and Business (WU Wien), Austria

Rainer Alt, Leipzig University, Germany

About the HICSS Conference

Since 1968, the Hawaii International Conference on System Sciences (HICSS) has been known worldwide as one of the longest-standing working scientific conferences in Information Technology Management. HICSS provides a highly interactive working environment for top scholars from academia and industry from over 60 countries to exchange ideas in various areas of information, computer, and system sciences.

According to Microsoft Academic, HICSS ranks 36th in terms of citations among 4,444 conferences in all fields worldwide. The Australian Government’s Excellence in Research for Australia (ERA) initiative has given HICSS an “A” rating, one of 32 Information Systems conferences so honoured out of 241 (46 B and 146 C ratings). Data supplied by the Australian Research Council, December 2009.

Unique characteristics of the conference include:

  • A matrix structure of tracks and minitracks that enables research on a rich mixture of cutting-edge computer-based applications and technologies.
  • Three days of presentations of peer-reviewed papers and discussions in a workshop setting that promotes interaction, leading to revised and extended papers that are published in journals, books, and special issues, as well as additional research.
  • A full day of Symposia, Workshops, and Tutorials.
  • Keynote addresses and distinguished lectures which explore particularly relevant topics and concepts.
  • Best Paper Awards in each track which recognize superior research performance.
  • HICSS is the #1 IS conference in terms of citations as recorded by Google Scholar.
  • A doctoral consortium that helps participants work with senior scholars on their work-in-progress, leading to journal publications.
  • HICSS panels that help shape future research directions.

Author Instructions

  • http://hicss.hawaii.edu/authors/

CfP: Special Issue on Accountability Mechanisms in Socio-Technical Systems in the Journal of Responsible Technology

February 8, 2021 in Uncategorized
Journal of Responsible Technology

Journal of Responsible Technology

There are growing demands for greater accountability of socio-technical systems. While there is broader research on a range of issues relating to accountability, such as transparency or responsibility, concrete proposals for developing accountability mechanisms that reflect the socio-technical nature of information systems are less discussed. A key challenge in existing research is how to imagine information systems which promote accountability. While this is part of the wider debate on fairness, accountability, and transparency principles in the FAccT community, and around explainability and bias in artificial intelligence, concrete proposals for developing socio-technical accountability mechanisms are seldom discussed in detail.

This need for accountability should be reflected both at technical levels, as well as in the socio-technical embeddedness of the systems being developed. By trying to specifically isolate the accountability mechanism within socio-technical systems, we believe it is possible to systematically identify and compare such mechanisms within different systems, as well as push for a debate about the effectiveness of such mechanisms.

This special issue focuses on the mechanisms for tackling issues of accountability in socio-technical systems. The goal is to provide a forum for proposing, describing and evaluating specific accountability mechanisms; exploring the challenges of transforming more abstract notions of accountability into practical implementations; for critical perspectives on different accountability approaches; highlighting the successes as well as challenges from practical use-cases; and so forth.

Recognizing that the challenges are socio-technical, we solicit papers from a range of disciplines. Given the practical focus of this special issue, we specifically encourage papers that discuss accountability from a technical, organisational, legal, or STS perspective.

Potential areas of interest for submissions include, but are not limited to:

– user cognition and human behaviour in relation to the design of interfaces that promote accountability
– increasing the accountability of automated decision-making systems and decision-support systems
– ensuring accountability in public sector systems
– perspectives on accountability in the context of real-world technologies
– contributions that bring together technical and non-technical perspectives
– critical examinations of existing accountability technologies and mechanisms aimed at gaining new insights about their socio-technical characteristics and implications.

In all these and further areas, accountability in socio-technical systems needs to be addressed more systematically. The concrete implementation of such accountability mechanisms has so far received only limited attention. Similarly, the challenges arising during such transformations of abstract accountability concepts into concrete implementations as well as the critical evaluation of respective implementations are only rarely covered by existing research. We see this special issue as a way to close these gaps by engaging with the existing debate on accountability.

Submission Guidelines

CfP: https://www.journals.elsevier.com/journal-of-responsible-technology/call-for-papers/accountability-mechanisms-in-socio-technical-systems

Authors should follow the Journal guidelines for paper submission. Full details are available here: https://www.elsevier.com/journals/journal-of-responsible-technology/2666-6596/guide-for-authors

Submissions must be made through the Editorial Manager submissions system via the following link: https://www.editorialmanager.com/jrtech/default.aspx

For questions about special issue submissions or the review process, please don’t hesitate to contact the Guest Editors here: b.wagner@tudelft.nl

Relevant Dates

– Submissions open from 1 February 2021

– Submissions due by 30 June 2021

Guest Editors
– Ben Wagner, TU Delft
– Jat Singh, University of Cambridge
– Frank Pallas, TU-Berlin
– Florian Cech, TU Vienna
– Soheil Human, WU Vienna

User – Quo vadis?

January 19, 2021 in Lab updates

Marie Therese Sekwenz

“The perfect map is just an illusion” – this statement captures the problem inherent in mapping anything. Against this background, a series of expert talks was held under the title Perceiving Time/Space through Apps: Human-centric Digital Transformation in GIS (geographic information systems). This Sustainable Computing Lab and MyData Hub Austria Meetup #6 event sought to shed light on aspects of mobility as well as usability and bias. The online meeting was attended by over 50 participants from around the world.

The recently published article by Soheil Human, Till Winkler and Ben Wagner takes a closer look at one of the most frequently used maps and its recommendation systems – Google Maps. “Technology shapes our world and behaviour,” according to Ben Wagner, who held the first talk. The authors argue that Google’s technology follows a “one-solution-fits-all-users” approach, while the representation needed is not only user- but also context-dependent. As a result, Google Maps presents routing recommendations that are inherently biased, because the compared options rest on different underlying assumptions, instead of offering visualisations that would give the user a better understanding of each recommendation or option. To give an example from the aforementioned paper, the travel time presented to the user might not take into account that the user is not already seated in the car but first has to walk to the vehicle. Another assumption of Google Maps is the absence of any representation of the time spent looking for a parking space at the end of the route. In Google’s perfect world, you will find a parking space wherever you desire one – even in the city centre. This representation can be seen as a constant nudging of users that gradually makes them more likely to prefer travelling by car over public transport. Since mobility-related projects are very costly, Stefanie Peer then explained the cost-benefit analysis of such mobility questions. Here, the benefits can be understood as travel-time gains or improvements in comfort.

These benefits have to be monetized in order to calculate the “willingness of people to pay for shorter travelling times”. Stefanie Peer also stressed that this ‘objective’ travel time is only a measure and might therefore differ for every user according to their circumstances and travel needs. Robert Braun addressed rather philosophical questions related to automobility, such as “What can be understood under the term automobility?”, “To what degree is it just a social construct?”, “Are we talking about Euclidean space or semiotic space? Is it the representation? Is this geographical reality or a produced reality after all?”.

These questions around data, reality, and representation appear to be a key challenge of today. Furthermore, this discourse is political and coloured by debates on data sovereignty in Europe. While Europe has long used digital infrastructure from America or China, the trend now points towards establishing what is known as the European Public Sphere. According to Robert Braun, this paradigm shift can also be related to the Society-of-Things, which describes not only an IoT (Internet of Things) understanding of the world but extends its principles of overall connectivity to the entire society.

Robert Braun also brought up the debate about data accessibility and data ownership. Currently, data is often stored in silos that grant only a few people access, so that it cannot be used for additional purposes, e.g. research. This is why Robert Braun wants to actively contribute to creating a data economy with the user and data creator at the centre.

Martin Semberger from the Austrian Federal Ministry for Digital and Economic Affairs (BMDW) argued that the state is an important actor in the field of data in general and mobility data in particular. Martin Semberger is an expert on European digital-single-market topics such as the re-use of public sector information and open data. He also stressed the importance of legally addressing the digital economy, describing the General Data Protection Regulation (GDPR) as a “global benchmark”.

The directive on open data and the re-use of public sector information, which Martin Semberger has worked on, aims to lead the way towards open data for society.

The directive therefore creates “Ways and means on how we can compare [options] to make better use of data”

While we live in a marginal-cost society in the digital world, Martin Semberger argues that

the general principle in Europe is that all publicly financed data should be openly available in order to boost the potential for creativity for innovation and for the economy.

The directive contains provisions on, for example, standard licences (here, Creative Commons licences should be given preference), transparency conditions, requests, and non-discrimination clauses. Through this, a more democratic playing field should be established within the European Single Market. Martin Semberger also mentioned the European Data Governance Act, which takes the secure sharing of data into consideration.

The last presentation centred on sustainable automobility. Florian Daniel, the innovation manager of Carployee, described their carpooling app. Carployee can be used by companies to reduce their overall CO2 footprint and brings together effective solutions for drivers and passengers. Through gamification and HR-related incentives, Carployee tries to make carpooling flexible and comfortable. In this use case, data is used to schedule routes and nudge users towards more sustainable mobility decisions – in contrast to Google Maps, an example of nudging technology that promotes climate goals.

Open Letter to Facebook, Twitter, and YouTube: Stop silencing critical voices from the Middle East and North Africa

December 19, 2020 in Opinion

Ten years ago today, 26-year-old Tunisian street vendor Mohamed Bouazizi set himself on fire in protest over injustice and state marginalization, igniting mass uprisings in Tunisia, Egypt, and other countries across the Middle East and North Africa.

As we mark the 10th anniversary of the Arab Spring, we, the undersigned activists, journalists, and human rights organizations, have come together to voice our frustration and dismay at how platform policies and content moderation procedures all too often lead to the silencing and erasure of critical voices from marginalized and oppressed communities across the Middle East and North Africa.

The Arab Spring is historic for many reasons, and one of its outstanding legacies is how activists and citizens have used social media to push for political change and social justice, cementing the internet as an essential enabler of human rights in the digital age.   

Social media companies boast of the role they play in connecting people. As Mark Zuckerberg famously wrote in his 2012 Founder’s Letter:

“By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.”

Zuckerberg’s prediction was wrong. Instead, more governments around the world have chosen authoritarianism, and platforms have contributed to their repression by making deals with oppressive heads of state; opening doors to dictators; and censoring key activists, journalists, and other changemakers throughout the Middle East and North Africa, sometimes at the behest of other governments:

  • Tunisia: In June 2020, Facebook permanently disabled more than 60 accounts of Tunisian activists, journalists, and musicians on scant evidence. While many were reinstated, thanks to the quick reaction from civil society groups, accounts of Tunisian artists and musicians still have not been restored. We sent a coalition letter to Facebook on the matter but we did not receive a public response. 
  • Syria: In early 2020, Syrian activists launched a campaign to denounce Facebook’s decision to take down or disable thousands of anti-Assad accounts and pages that had documented war crimes since 2011, under the pretext of removing terrorist content. Despite the appeal, a number of those accounts remain suspended. Similarly, Syrians have documented how YouTube is literally erasing their history.
  • Palestine: Palestinian activists and social media users have been campaigning since 2016 to raise awareness around social media companies’ censorial practices. In May 2020, at least 52 Facebook accounts of Palestinian activists and journalists were suspended, and more have since been restricted. Twitter suspended the account of a verified media agency, Quds News Network, reportedly on suspicion that the agency was linked to terrorist groups. Requests to Twitter to look into the matter have gone unanswered. Palestinian social media users have also expressed concern numerous times about discriminatory platform policies.
  • Egypt: In early October 2019, Twitter suspended en masse the accounts of Egyptian dissidents living in Egypt and across the diaspora, directly following the eruption of anti-Sisi protests in Egypt. Twitter suspended the account of one activist with over 350,000 followers in December 2017, and the account still has yet to be restored. The same activist’s Facebook account was also suspended in November 2017 and restored only after international intervention. YouTube removed his account earlier in 2007.

Examples such as these are far too numerous, and they contribute to the widely shared perception among activists and users in MENA and the Global South that these platforms do not care about them, and often fail to protect human rights defenders when concerns are raised.  

Arbitrary and non-transparent account suspensions and removals of political and dissenting speech have become so frequent and systematic that they cannot be dismissed as isolated incidents or the result of transitory errors in automated decision-making.

While Facebook and Twitter can be swift in responding to public outcry from activists or private advocacy by human rights organizations (particularly in the United States and Europe), in most cases responses to advocates in the MENA region leave much to be desired. End-users are frequently not informed of which rule they violated, and are not provided a means to appeal to a human moderator. 

Remedy and redress should not be a privilege reserved for those who have access to power or can make their voices heard. The status quo cannot continue. 

The MENA region has one of the world’s worst records on freedom of expression, and social media remains critical for helping people connect, organize, and document human rights violations and abuses. 

We urge you to not be complicit in censorship and erasure of oppressed communities’ narratives and histories, and we ask you to implement the following measures to ensure that users across the region are treated fairly and are able to express themselves freely:

  • Do not engage in arbitrary or unfair discrimination. Actively engage with local users, activists, human rights experts, academics, and civil society from the MENA region to review grievances. Regional political, social, cultural context(s) and nuances must be factored in when implementing, developing, and revising policies, products, and services.
  • Invest in the necessary local and regional expertise to develop and implement context-based content moderation decisions aligned with human rights frameworks in the MENA region. A bare minimum would be to hire content moderators who understand the diverse dialects of spoken Arabic across the twenty-two Arab states. Those moderators should be provided with the support they need to do their job safely, healthily, and in consultation with their peers, including senior management.
  • Pay special attention to cases arising from war and conflict zones to ensure content moderation decisions do not unfairly target marginalized communities. For example, documentation of human rights abuses and violations is a legitimate activity distinct from disseminating or glorifying terrorist or extremist content. As noted in a recent letter to the Global Internet Forum to Counter Terrorism, more transparency is needed regarding definitions and moderation of terrorist and violent extremist (TVEC) content.
  • Preserve restricted content related to cases arising from war and conflict zones that Facebook makes unavailable, as it could serve as evidence for victims and organizations seeking to hold perpetrators accountable. Ensure that such content is made available to international and national judicial authorities without undue delay.
  • Public apologies for technical errors are not sufficient when erroneous content moderation decisions are not changed. Companies must provide greater transparency and notice, and offer meaningful and timely appeals for users. The Santa Clara Principles on Transparency and Accountability in Content Moderation, which Facebook, Twitter, and YouTube endorsed in 2019, offer a baseline set of guidelines that must be immediately implemented.

Signed by,

Access Now

Arabic Network for Human Rights Information (ANHRI)

Article 19

Association for Progressive Communications (APC)

Association Tunisienne de Prévention Positive

Avaaz 

Cairo Institute for Human Rights Studies (CIHRS)

The Computational Propaganda Project

Daaarb — News — website

Egyptian Initiative for Personal Rights

Electronic Frontier Foundation

Euro-Mediterranean Human Rights Monitor 

Global Voices

Gulf Centre for Human Rights, GC4HR

Hossam el-Hamalawy, journalist and member of the Egyptian Revolutionary Socialists  Organization

Humena for Human Rights and Civic Engagement 

IFEX

Ilam- Media Center For Arab Palestinians In Israel

ImpACT International for Human Rights Policies

Initiative Mawjoudin pour l’égalité

Iraqi Network for Social Media – INSMnetwork

I WATCH Organisation (Transparency International Tunisia)

Khaled Elbalshy, Editor in Chief, Daaarb website

Mahmoud Ghazayel,  Independent

Marlena Wisniak, European Center for Not-for-Profit Law

Masaar — Technology and Law Community

Michael Karanicolas, Wikimedia/Yale Law School Initiative on Intermediaries and Information

Mohamed Suliman, Internet activist

My.Kali magazine — Middle East and North Africa

Palestine Digital Rights Coalition, PDRC

The Palestine Institute for Public Diplomacy 

Pen Iraq

Quds News Network

Ranking Digital Rights 

Dr. Rasha Abdulla, Professor, The American University in Cairo

Rima Sghaier, Independent 

Sada Social Center

Skyline International for Human Rights

SMEX

Soheil Human, Vienna University of Economics and Business / Sustainable Computing Lab

The Sustainable Computing Lab

Syrian Center for Media and Freedom of Expression (SCM)

The Tahrir Institute for Middle East Policy (TIMEP)

Taraaz

Temi Lasade-Anderson, Digital Action

Vigilance Association for Democracy and the Civic State — Tunisia

WITNESS

7amleh — The Arab Center for the Advancement of Social Media

Originally published at: https://www.accessnow.org/facebook-twitter-youtube-stop-silencing-critical-voices-mena/

Predicting Human Lives? New Regulations for AI Systems in Europe

November 11, 2020 in Opinion

Rita Gsenger

“Humanity’s story had been improvised, now it was planned, years in advance, for a time the sun and moon aligned, we brought order from chaos.” (Serac, Westworld, Season 3, Episode 5, HBO)

Rehoboam – named after the first king of Judah, the son and successor of King Solomon, who was said to be the wisest of all human beings – predicts human lives and dictates what individuals will do, without their knowledge, in the HBO series “Westworld”. The giant quantum computer looks like a black globe covered in red, flickering lights, and it is placed at the entrance of the company that owns it, where the public and school children can look at it and visit it, to see that it is not such a dangerous mythical creature after all. Nobody except its creators understands how the system works; it structures and shapes society while controlling its own system as well. Rehoboam analyses millions of files on individuals, predicting the course of their lives, including their precise time of death. The citizens of this world do not know that their lives are shaped and controlled by the predictions of an AI system, which aims to establish and maintain order in society – a society that was bound to destroy itself and was saved by a god created by a human.

Not unlike in contemporary science fiction, the increasing use and deployment of AI technologies influences not only our online but increasingly also our offline lives. The unrest and resistance against measures to fight the COVID-19 pandemic have shown that online mobilisation, combined with algorithms that push the content that is shared, viewed, and liked the most, can create difficult situations with consequences for entire societies, bringing insecurities, distrust, and fears to the surface and into people’s actions. The conversation has thus shifted to newly strengthened science skepticism and conspiracy theories with real-life consequences, making the need for human-centric AI systems and digital literacy more palpable.

Algorithmic decision-making (ADM) systems play an increasingly important role in the landscape of AI technologies, as they are often used to make decisions and fulfil roles previously performed by human beings, for instance in employment, credit scoring, education, and sentencing. Predictive policing and the prediction of recidivism rates are to some extent done by AI systems in all US states. Many countries in Europe are adopting various ADM technologies, also to fight the COVID-19 pandemic, as summarised by AlgorithmWatch.

The European Commission, and especially DG Connect, is currently drafting various pieces of legislation for the end of this year and next year, such as the Digital Services Act regulating social media. These follow the broader themes and priorities outlined in the White Paper on AI and the European Data Strategy. In a leaked position paper, countries such as Sweden, France, and Denmark advocate a softer approach relying on recommendations and a voluntary labelling scheme for AI systems, intended to increase visibility for European citizens, businesses, and administrations and to enable them to make an ethical choice. According to the position paper, this would provide incentives for companies to go beyond the law and establish trustworthy AI solutions on the one hand, and give them competitive advantages on the other. Germany, however, would prefer a stricter approach to legislating AI technologies, arguing that trustworthiness of AI systems should be established by the legislation itself.

In a recent online event on AI and racism, EC representatives discussed AI and structural racism. Participants agreed that biased data are a problem when employing these technologies and that racialized communities are adversely affected by them. For instance, Sarah Chander, a senior policy advisor at European Digital Rights, expressed concern about a cross-national database with automated exchange of information on criminal offences (including DNA information, following the Prüm decision) being extended with facial recognition data. The database is criticized because of the possibility of false positives, which might lead investigations in the wrong direction. Anthony Whelan, a digital policy advisor in the Cabinet of President von der Leyen, did not, however, deem it controversial to use facial recognition data to identify individuals, although he acknowledged that training data need to be free of bias and sufficiently representative.

How far the proposed legislation by the European Commission will go and how it will address the many issues raised by these technologies remains unclear. One may hope that European values and the privacy of citizens and other concerned parties will be respected and will guide these debates, so that a society governed by Rehoboam remains dystopian fiction.