Advancing AI Research: An Overview of the Sustainable Computing Lab’s Initiatives

January 18, 2024 in Announcements, Lab updates


Artificial Intelligence (AI) is increasingly recognized as a pivotal force in the digital transformation era, profoundly affecting personal, business, and societal realms. The extent of AI’s influence is vast, reshaping lifestyles, work environments, and even the underpinnings of democratic systems. In this context, the Sustainable Computing Lab has positioned itself at the forefront of AI research, aligning its genesis and evolution with the exploration of AI’s multifaceted impact.

The lab’s establishment was motivated by the ambition to delve into AI research, fostering a community committed to advancing Sustainable, Human-compatible, Lawful, Accountable, and Ethical digital technologies. This focus has not only shaped the lab’s research trajectory but also guided its organization of events and initiatives aimed at amplifying the positive impact of digital technologies. Recognition from prestigious bodies, such as the Artificial Intelligence Award by the Internet Foundation Austria, underscores the significance and visibility of the lab’s contributions in this domain.

Presently, the lab is engaged in several cutting-edge AI research projects, collaborating with esteemed institutions and tackling diverse topics within the AI sphere. These include:

  1. Sustainable HALE AI: Studying the co-construction of AI technologies that are sustainable, human-compatible, accountable, lawful, and ethical.
  2. Human-compatible, value-aware, and needs-aware AI (in collaboration with George Washington University): Investigating the integration of needs and values in AI systems and building human-compatible AI.
  3. Applied AI ethics: Examining how background and context influence judgments regarding AI use in military settings.
  4. Offensive AI (in collaboration with Liechtenstein University): Understanding the use of AI for offensive purposes, the state of knowledge, and mitigation strategies.
  5. Generative AI in digital transformation: Investigating the application of generative AI models, especially large language models (LLMs), in organizational digital transformation.
  6. Assisting digital protection through generative AI: Studying how generative AI can be used to develop assistant systems for enhancing personal digital protection.

These research directions and projects exemplify the lab’s commitment to exploring and shaping the multifaceted impact of AI on society, aligning with its foundational goals of promoting sustainability, human compatibility, lawfulness, accountability, and ethics in digital technologies. Through these endeavors, the Sustainable Computing Lab continues to contribute significantly to the discourse and development of AI, reinforcing its status as a key player in the field.

CFP: Human-centricity in a Sustainable Digital Economy | HICSS-57

March 29, 2023 in Announcements

CALL FOR PAPERS

The 57th Hawaii International Conference on System Sciences Mini-track on

Human-centricity in a Sustainable Digital Economy

http://hicss.hawaii.edu | HICSS-57
Hilton Hawaiian Village Waikiki Beach Resort | January 3-6, 2024

 

Paper Submission Deadline: June 15, 2023 | 11:59 pm HST

The global digital transformation has changed many different aspects of our lives. Not only economies and societies but also people’s personal lives have been influenced by this new and ever-emerging era of our history. While the digital age has made it possible to provide novel services and solutions for end-users, it has also raised serious concerns at the individual and societal levels, such as issues regarding online privacy, algorithmic bias, fairness and accountability of information systems, transparency, governance, and explainability of information systems, end-user manipulation, fake news, traceability, etc. The development of human-centric and end-user-empowering information systems can be one approach towards “digital sustainability”, since such systems enable individuals to influence how their data is used, by whom, and for which purpose. Many novel and personalized services are emerging in this direction, which make the digital economy sustainable, i.e. a space that puts human users at its centre.

This minitrack aims to attract research that advances the understanding of human-centricity and end-user empowerment in a sustainable digital economy. As the transformation is multidimensional in nature, the minitrack adopts an interdisciplinary perspective, which considers human-centricity and end-user empowerment across application domains (e.g., software development, digital commerce, healthcare, administration, mobile apps, social media, and online services) and disciplines (e.g., economics, ecology, computer science, sociology). Among the relevant topics are:

  • Characteristics and design of sustainable human-centric information systems
  • Evaluation of information systems from a human-centric perspective
  • Co-creation and co-production of human-centric sustainable information systems
  • Analysis and design of technologies (e.g., AI, blockchain) that empower end-users
  • Design of human-centric end-user agents, chatbots, AI and machine learning
  • Identity, privacy and consent management systems (e.g., self-sovereign identities) 
  • Fairness, transparency, accountability and controllability of information systems
  • Legal, social, ethical, political or economic aspects of human-centricity in information systems
  • Business value of human-centric and/or user empowered solutions
  • Human-centric aspects of digital nudging
  • The role of platforms in digital sustainability 
  • Human-centricity and sustainability in platform economy, shared economy, circular economy, and digital economy
  • Study of gaps, barriers, enablers, drivers, and concerns related to human-centricity and sustainability in digital systems, ecosystems, and environments
  • Ubiquitous, pervasive, and/or ambient human-centricity in digital environments
  • Study of humans’ perception, experience, or interactions in digital environments
  • COVID-19’s impact on human-centricity or sustainability of information systems
  • Emerging AI systems for automated decision-making and text generation (such as ChatGPT) and their impact on human-centricity
  • Human-centricity in cyber-physical/metaverse spaces
  • Human-centricity and data management
  • Human-centricity and science, such as citizen science or digital transformation in science and knowledge production or education
  • Approaches affiliated with human-centricity, such as Social Welfare Computing, Life Engineering, Digital Humanism, Digital Sustainability, Human Awareness

Publication of Papers:

HICSS is the #1 Information Systems conference in terms of citations as recorded by Google Scholar. Presented papers will be included in the Proceedings of HICSS-57. Selected papers will be invited for a fast-track special issue in Electronic Markets – The International Journal on Networked Business.

Important Dates for Paper Submission
Submission Deadline: June 15, 2023 | 11:59 pm HST
Notification of Acceptance/Rejection: August 17, 2023 | 11:59 pm HST
Deadline for Submission of Final Manuscript for Publication: September 22, 2023 | 11:59 pm HST
Deadline for at least one author to register for HICSS-57: October 1, 2023 | 11:59 pm HST

Conference Location/Dates:

Hilton Hawaiian Village Waikiki Beach Resort | January 3-6, 2024

 

Organizers:

Soheil Human, Vienna University of Economics and Business (WU Wien), Austria

Gustaf Neumann, Vienna University of Economics and Business (WU Wien), Austria

Rainer Alt, Leipzig University, Germany

About the HICSS Conference:

Since 1968, the Hawaii International Conference on System Sciences (HICSS) has been known worldwide as one of the longest-standing working scientific conferences in Information Technology Management. HICSS provides a highly interactive working environment for top scholars from academia and industry from over 60 countries to exchange ideas in various areas of information, computer, and system sciences.

Unique characteristics of the conference include:

  • A matrix structure of tracks and minitracks that enables research on a rich mixture of cutting-edge computer-based applications and technologies.
  • Four days of presentations of peer-reviewed papers and discussions in a workshop setting that promotes interaction, leading to revised and extended papers that are published in journals, books, and special issues, as well as additional research.
  • Parallel Symposia, Workshops, and Tutorials.
  • Keynote addresses and distinguished lectures which explore particularly relevant topics and concepts.
  • Best Paper Awards in each track which recognize superior research performance.
  • HICSS is the #1 IS conference in terms of citations as recorded by Google Scholar.
  • A doctoral consortium that helps participants work with senior scholars on their work-in-progress leading to journal publications.
  • HICSS panels that help shape future research directions.

Author Instructions:

http://hicss.hawaii.edu/authors/ 

Mini-track Link:

https://hicss.hawaii.edu/tracks-57/internet-and-the-digital-economy/#human-centricity-in-a-sustainable-digital-economy-minitrack

A Call for Interdisciplinary Collaboration toward the Realization of Needs-aware AI

June 28, 2022 in Opinion


Soheil Human
soheil.human@wu.ac.at

Published: 28.06.2022

Needs-aware AI

Need is one of the most fundamental constructs connected to the different dimensions of Human-awareness, Accountability, Lawfulness, and Ethicality (HALE) of sociotechnical systems. This construct, however, has not been well considered in the design, development, evaluation, and sustainment of AI-based sociotechnical systems. In our new article [1], we call for the realization of “Needs-aware AI” through interdisciplinary collaborations.

Footnotes: 

[1] The article can be currently accessed here: https://rdcu.be/cQvQu; the permanent link is: https://doi.org/10.1007/s43681-022-00181-5.

Bibliography:

  • Watkins, R., Human, S. Needs-aware artificial intelligence: AI that ‘serves [human] needs’. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00181-5
  • Lee, K.-F., O’Reilly, T.: Meet the Expert: How AI Will Change Our World by 2041. O’Reilly Media, Inc. (2021).
  • Shneiderman, B.: Design lessons from AI’s two grand goals: human emulation and useful applications. IEEE Trans. Technol. Soc. 1(2), 73–82 (2020).
  • Shneiderman, B.: Human-Centered AI. Oxford University Press, Oxford (2022).
  • Human, S., Fahrenbach, F., Kragulj, F., Savenkov, V.: Ontology for representing human needs. In: Różewski, P., Lange, C. (eds.) Knowledge engineering and semantic web communications in computer and information science, pp. 195–210. Springer International Publishing, Cham (2017)
  • OECD Report: Alternatives to traditional regulation. https://www.oecd.org/gov/regulatory-policy/42245468.pdf. Accessed 18 May 2022
  • Human, S., Gsenger, R., Neumann, G.: End-user empowerment: An interdisciplinary perspective. In: Proceedings of the 53rd Hawaii International Conference on System Sciences, Hawaii, United States, pp. 4102–4111 (2020)
  • Human, S., Watkins, R.: Needs and Artificial Intelligence. arXiv:2202.04977 [cs.AI] (2022). https://doi.org/10.48550/arXiv.2202.04977
  • Watkins, R., Meiers, M.W., Visser, Y.: A guide to assessing needs: essential tools for collecting information, making decisions, and achieving development results. World Bank Publications (2012)
  • McLeod, S.K.: Knowledge of need. Int. J. Philos. Stud. 19(2), 211–230 (2011)

 

Introducing Advanced Data Protection Control (ADPC)

June 14, 2021 in Announcements

ADPC can fundamentally change our practice of online "consenting".

We are excited to introduce you to the Advanced Data Protection Control (ADPC).
ADPC is a proposed automated mechanism for the communication of users’ privacy decisions. It aims to empower users to protect their online choices in a human-centric, easy and enforceable manner. ADPC also supports online publishers and service providers to comply with data protection and consumer protection regulations.
You hate “cookie banners” too? ADPC would allow users to set their privacy preferences in their browser, plugin, or operating system and communicate them in a simple way – limiting friction in user interaction for providers and users alike, as foreseen or planned in various innovative laws.
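
For readers curious about what this could look like in practice, here is a minimal TypeScript sketch of how a user agent (browser, plugin, or operating system) might turn stored privacy decisions into an ADPC-style request header. The header name and the consent/withdraw grammar follow our reading of the public ADPC draft; please consult the specification itself for the authoritative syntax, and treat the request identifiers below as purely illustrative.

    // Illustrative sketch only (not part of the specification text): a user agent
    // turns stored privacy decisions into an ADPC-style request header.
    interface PrivacyDecisions {
      consented: string[];   // identifiers of consent requests the user accepted
      withdrawn: string[];   // identifiers the user refused or withdrew
    }

    function buildAdpcHeader(decisions: PrivacyDecisions): string {
      const parts: string[] = [];
      if (decisions.consented.length > 0) {
        parts.push(`consent=${decisions.consented.join(" ")}`);
      }
      if (decisions.withdrawn.length > 0) {
        parts.push(`withdraw=${decisions.withdrawn.join(" ")}`);
      }
      return parts.join(", ");
    }

    // Example: the user accepted "analytics" but withdrew "marketing-email".
    const header = buildAdpcHeader({
      consented: ["analytics"],
      withdrawn: ["marketing-email"],
    });
    console.log(header); // "consent=analytics, withdraw=marketing-email"
    // A user agent could then attach this value to outgoing requests, so that the
    // decision travels with the user instead of living in each site's banner.

The point of the sketch is the direction of control: the decisions live with the user agent, and websites only receive the resulting signal.
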
ADPC was developed as a part of our RESPECTeD project, a joint project with NOYB, that was led by Soheil Human and Max Schrems.
You can find more information and follow ADPC updates on: https://twitter.com/ADPC_Spec
Thank you for supporting the development of ADPC over the last years. It would not have been possible without many of you!
LET’S CONSTRUCT A HUMAN-CENTRIC AND SUSTAINABLE DIGITAL WORLD TOGETHER!

Enhancing Information and Consent in the Internet of Things

June 9, 2021 in Opinion

Victor Morel

Victor Morel has recently joined the Sustainable Computing Lab. In this blog post, he introduces the project he has recently completed: his PhD thesis.

Motivation

The introduction in 2018 of the General Data Protection Regulation (GDPR) imposes obligations on data controllers regarding the content of information about personal data collection and processing, and the means of communicating this information to data subjects. This information is all the more important as it is required for consent, which is one of the legal grounds for processing personal data. However, the Internet of Things can make it difficult to implement lawful information communication and consent management. The tension between the GDPR’s requirements for information and consent and the Internet of Things cannot be easily resolved, but it is possible. The goal of his thesis is to provide a solution for information communication and consent management in the Internet of Things from a technological point of view.

A generic framework for information communication and consent management

To do so, he introduced a generic framework for information communication and consent management in the Internet of Things. This framework is composed of a protocol to communicate and negotiate privacy policies, requirements for presenting information and interacting with data subjects, and requirements on the provability of consent.
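
To make these building blocks more tangible, the following TypeScript sketch shows what a machine-readable privacy policy and a provable consent record could look like in a framework of this kind. All type and field names are hypothetical illustrations chosen for this post; they are not taken from the thesis or its prototypes.

    // Hypothetical data structures, for illustration only.
    interface PrivacyPolicy {
      deviceId: string;
      controller: string;                 // entity responsible for the processing
      dataCategories: string[];           // e.g. ["location", "temperature"]
      purposes: string[];                 // e.g. ["analytics", "maintenance"]
      retentionDays: number;
      legalBasis: "consent" | "legitimate-interest" | "contract";
    }

    interface ConsentRecord {
      policy: PrivacyPolicy;              // the policy the decision refers to
      subjectId: string;
      decision: "granted" | "refused" | "withdrawn";
      timestamp: string;                  // ISO 8601, useful for provability
      signature: string;                  // e.g. a signature over the record
    }

    // A data subject's agent could match an announced policy against preferences:
    function decide(policy: PrivacyPolicy, refusedPurposes: Set<string>): "granted" | "refused" {
      return policy.purposes.some(p => refusedPurposes.has(p)) ? "refused" : "granted";
    }

Roughly speaking, in the direct variant such a policy would be communicated by the device or gateway itself, while in the indirect variant it would be retrieved from a registry, as in the prototypes described below.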

Technical options

The feasibility of this generic framework is supported by different implementation options. The communication of information and consent through privacy policies can be implemented in two different manners: directly and indirectly. Different ways to implement the presentation of information and the provability of consent are then presented. A design space is also provided for system designers as a guide for choosing between the direct and the indirect implementations.

Prototype implementations

Finally, fully functioning prototypes devised to demonstrate the feasibility of the framework’s implementations are presented. The indirect implementation of the framework is illustrated by a collaborative website named Map of Things. The direct implementation, combined with the agent presenting information to data subjects, is sketched as a mobile application, CoIoT.

Call for Participation in W3C Consent Community Group

March 5, 2021 in Announcements

Soheil Human

The concept of consent plays an essential role in the use of digital technologies as an enabler of the individual’s ownership, control, and agency. Regulations such as the GDPR assert this relationship by permitting the use of consent as one of the possible legal bases for the lawful practice of data processing. As a result, obtaining consent is widely practised in the digital world and can be perceived as an essential means of enabling the individual’s agency regarding the management and ownership of their personal data. While different legal frameworks specify various requirements and obligations regarding the legal validity of consent (which should be, e.g., valid, freely given, specific, informed, and active), existing and ongoing research shows that the majority of people are not empowered to exercise their digital right to privacy and lawful “consenting”, due to various malpractices and a lack of technological means acting in the individuals’ interest.

The W3C Consent CG (https://www.w3.org/community/consent/) aims to contribute towards the empowerment of humans concerning their rights of privacy and agency, by advocating interdisciplinary, pluralist, human-centric approaches to digital consent that are technologically and legally enforceable.

The mission of this group is to improve the experience of digital “consenting” while ensuring it remains adherent to relevant standards and laws. For this, the group will: (i) provide a space for people and stakeholders to come together; (ii) highlight and analyse concepts, issues, and problems around digital consenting; and (iii) propose and develop solutions. Some concrete areas of work for this group are: (a) developing interdisciplinary solutions; (b) documenting and achieving legal compliance; (c) improving the user experience; and (d) utilising existing, and developing new, concepts and standards for digital consent.

In order to join the group, you will need a W3C account. Please note, however, that W3C Membership is not required to join a Community Group.

Global Privacy Control (GPC) + GDPR: will it work?

February 26, 2021 in Opinion


Global Privacy Control (GPC) is a signal to opt out of data sharing. Will it work with the GDPR?

Global Privacy Control (GPC) is a boolean or binary signal sent by browsers to websites to indicate the user’s request not to share (or sell) their personal data with third parties. The authors (and supporters) of this specification include people from the New York Times, Wesleyan University, DuckDuckGo, and Brave (along with many other researchers and supporters). This makes it more than a toy project, given that a big publisher, a search engine, and a web browser vendor are actively supporting its implementation and adoption.

Today, GPC tweeted uptake numbers in the “hundreds of thousands”, with adoption by major publishers in the USA and by WordPress. GPC is legally enforceable under the CCPA, where it acts as the ‘opt-out’ for ‘selling’ personal data, as confirmed in a tweet by AG Becerra (California). My interest in writing this is to explore how GPC relates to the other major data protection and privacy law across the Atlantic – the General Data Protection Regulation.

What is the GPC?

In essence, GPC is DNT reborn. It is a singular signal that, when set or true, indicates that the user has requested the controller (the website the signal is sent to) not to share or sell their data with third parties. In other words, it is a request to stop, or opt out of, the sharing/selling of personal data to third parties. Given its binary or boolean nature, the GPC signal is simple to send, read, and evaluate: it is either set or true, or it is not. The specification goes into more detail regarding the HTTP requests, headers, and structure for using the signal and its interactions. It also deals with how websites can indicate their support (or lack thereof) for abiding by the signal.
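
To make the mechanics concrete, here is a minimal TypeScript (Node.js) sketch of the two pieces just described, as I read the proposal: the "Sec-GPC: 1" request header sent by supporting browsers (which also expose navigator.globalPrivacyControl to scripts), and the small well-known resource a site can serve to indicate that it abides by the signal. The exact names and fields are defined by the specification, not by this sketch.

    // Sketch only; header and resource names are as I read the GPC proposal.
    import * as http from "node:http";

    const gpcSupportResource = JSON.stringify({
      gpc: true,                   // "this site abides by the GPC signal"
      lastUpdate: "2021-02-26",    // when this statement was last updated
    });

    http.createServer((req, res) => {
      // A site can announce its support for GPC under a well-known path.
      if (req.url === "/.well-known/gpc.json") {
        res.setHeader("Content-Type", "application/json");
        res.end(gpcSupportResource);
        return;
      }
      // Node lower-cases incoming header names; "1" means the user opts out.
      const optOut = req.headers["sec-gpc"] === "1";
      res.setHeader("Content-Type", "text/plain");
      res.end(optOut ? "GPC received: do not share/sell to third parties\n"
                     : "No GPC signal\n");
    }).listen(8080);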

GPC data-flow

The GPC works somewhat in the following manner:

  1. I go to a website using a web browser where GPC is set to on
  2. I consent to a notice
  3. The web browser sends the GPC signal to the website (this may already have occurred before Step 2) to indicate the request to opt out
  4. Website abides by the request and stops sharing data with third parties

Legality

The GPC spec mentions that websites are responsible for conveying how the signal is going to be used or interpreted, based on their operating and applicable jurisdictions and binding regulations. Under CCPA, the GPC has teeth to be legally enforceable, and thus we have a large (and expanding) adoption across platforms. The spec also specifically mentions GDPR, and quotes the potential legal clauses it can use. I’m copying it verbatim here:

The GDPR requires that “Natural persons should have control of their own personal data” ([GDPR], Recital 7). The GPC signal is intended to convey a general request that data controllers limit the sale or sharing of the user’s personal data to other data controllers ([GDPR] Articles 7 & 21). This request is expressed with every interaction that the user agent has with the server.

Note that this request is not meant to withdraw a user’s consent to local storage as per the ePrivacy Directive (“cookie consent”) ([EPRIVACY-DIRECTIVE]) nor is it intended to object to direct marketing under legitimate interest ([GDPR]).

In addition, Robin Berjon (New York Times), one of the authors of the spec, elaborated on how it works in a debate in a Twitter thread. Paul-Olivier Dehaye (founder of PersonalData.io and of “The Great Hack” documentary fame) then quipped about the possibility of using GDPR’s Code of Conduct mechanism to make GPC enforceable.

Has any EU data protection expert reviewed this? Companies have no obligation to honor a particular method chosen by the data subject to exercise their rights (unfortunately).

This being said, Art 40.2.f (Code of Conduct) does offer a chance to move in the right direction.

Others also pointed out various takes on, and relations to, the GDPR and DNT. See the tweets by Nataliia Bielova regarding broader applicability to the framework of legal bases under the GDPR; Ralf Bendrath discussed the applicability of Article 21 of the GDPR regarding the right to object. Irene Kamara and Lucas shared articles (this and this) about DNT being useful in today’s world.

What does GDPR say about consent?

The GDPR has consent as a lawful basis for processing in Article 6(1)(a) for personal data and in Article 9(2)(a) for special categories of personal data, and in other places such as data transfers, but let’s focus on these broadly as ‘consent’. About withdrawal, Article 7(3) states the following:

The data subject shall have the right to withdraw his or her consent at any time. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal. Prior to giving consent, the data subject shall be informed thereof. It shall be as easy to withdraw as to give consent.

Notably, the GDPR does not have ‘opt-outs’. It explicitly requires an ‘opt-in’ via consent (where consent is the legal basis), and a request to stop sharing data with a third party is equivalent to withdrawing the consent for it. Under the GDPR, consent for purposes and processing actions that are separate must also be given separately. That is, consent for sharing data with the controller is one instance of consent, and sharing that data further with a third party should be covered by a separate instance of consent. Recital 43 of the GDPR says:

Consent is presumed not to be freely given if it does not allow separate consent to be given to different personal data processing operations despite it being appropriate in the individual case

For completeness, Article 21 of the GDPR relates to the right to object. Specifically, Recital 69 says:

Where personal data might lawfully be processed because processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, or on grounds of the legitimate interests of a controller or a third party, a data subject should, nevertheless, be entitled to object to the processing of any personal data relating to his or her particular situation.

Thus, if consent is the legal basis, then withdrawing should limit the sharing of data with third parties. And if legitimate interest is the legal basis, then exercising the right to object should limit it. This is (probably) what GPC mentions in its specification about applicability for GDPR.
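
As a thought experiment, that argument can be written down as a tiny decision function: which GDPR mechanism a GPC-style “stop sharing with third parties” request would map to, depending on the legal basis the controller relies on. This TypeScript sketch is only my reading of the argument, not legal advice and not part of the GPC specification.

    // Illustration of the mapping argued above; names are my own.
    type LegalBasis = "consent" | "legitimate-interest" | "other";

    function gdprActionForGpc(basis: LegalBasis): string {
      switch (basis) {
        case "consent":
          // Art. 7(3): withdrawing consent stops the consent-based sharing.
          return "treat the signal as withdrawal of consent for third-party sharing";
        case "legitimate-interest":
          // Art. 21: the data subject objects to processing based on legitimate interest.
          return "treat the signal as an objection under Article 21";
        default:
          return "no obvious GDPR hook; the signal's effect is unclear";
      }
    }

    console.log(gdprActionForGpc("consent"));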

Why I’m feeling unsure

GPC is an exciting development for me. It is the first time (for me) that people have got together, created something, managed to roll it out, and even have a law that legalises its enforcement. I’ve thought about this many times, and there are several large questions that loom whenever GPC comes up. By GPC’s own specification, and its own admission, its applicability and enforceability under the GDPR is ambiguous at best and non-existent at worst. Where the CCPA has provisions that can be applied directly to make requests about sharing data with third parties, the GDPR does not specify any such broad restrictions and instead relies on its framework of legal bases and rights.

This distance between the legal text and the real world has been a point of pain, contention, and frustration, as we see no actions against large-scale and systemic consent mechanisms that misuse legal bases and purposes and are clearly falling afoul of GDPR compliance. So even a regulator weighing in on the applicability of GPC is no guarantee, because (a) there are ~50 DPAs in the EU, so there needs to be uniformity in interpretation, something the EDPB would likely be involved with, and (b) unless case law explicitly establishes that GPC is enforceable, there is always scope for someone to raise objections to using it.

Even without these, the process of applying GPC is unconvincing to me, no matter how well intentioned it is. I feel that it has some weird loopholes that it does not clarify, and as a result there are too many uncertainties – which in the GDPR and adtech world translate into loopholes, exploits, and malpractices.

#1 Setting GPC off could mean share with everyone

Let us pretend that I use a GPC-enabled browser, and I visit a website that requests my consent under the GDPR. My browser has probably signalled GPC to the website, or the website or its consent management platform (CMP) has checked whether I use GPC. Under the GDPR, consent choices should be set to a default of “no” or “off” or “prohibit”. Therefore, the interpretation of the GPC signal should have no effect on the default choices. However, if the GPC is set to an explicit off, one could argue for setting the consent defaults to permit third-party data sharing, since the individual apparently wishes it (through GPC = off).
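
To pin down the reading I consider correct, here is a hypothetical TypeScript sketch in which consent defaults are always “deny” and the GPC value, including an explicit off, is deliberately ignored when the defaults are initialised. Purpose names and the shape of the state are my own, not from any CMP or the GPC spec.

    // Hypothetical sketch: consent defaults are "deny" regardless of the GPC value.
    type ConsentState = Record<string, boolean>;

    function initialConsentState(purposes: string[], _gpc: boolean | undefined): ConsentState {
      // The GPC parameter is deliberately unused: GPC = off (or absent) is not an
      // opt-in, so every purpose starts as "false" until the user actively agrees.
      return Object.fromEntries(purposes.map((p): [string, boolean] => [p, false]));
    }

    console.log(initialConsentState(["analytics", "third-party-sharing"], false));
    // { analytics: false, "third-party-sharing": false }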

#2 GPC vs Agree button – who wins?

Let’s say I agree to sharing my data with a third party, knowingly and intentionally, by using the choices in the consent dialogue. Now I have indicated my wishes, but the GPC signal indicates otherwise. What should a website / controller do in such a situation, where the user’s consent is in conflict with an automatic signal? I would presume that a rational decision would be to respect the user’s choice over the user’s automatic agent’s choice. And this is a subtle avenue for manipulation: as long as individuals continue to click on the Agree and Accept All buttons, the GPC could be argued to have been overridden by the user’s choices. For proponents of imbalanced consent requests: I’m speaking about hypothetical scenarios where the choices and interactions are actually valid.

Where GPC does benefit is when the consent dialogue is malicious and abusive. In such cases, we want the GPC to enforce a right to withdraw or object despite us having clicked on Agree to All. This also forms the elevator pitch for adopting GPC: “don’t worry, click on the agree buttons, we’ll send a withdraw request right along with it”. So which method should we go with? Should GPC override the consent choices or vice-versa? I imagine this is a chicken and egg problem (though the egg definitely came first because evolution).

A more generous interpretation and argument is that CMP vendors or providers would somehow integrate the GPC into the choices. This is a fallacy as long as the Accept All button exists – because along with it, the dilemma above also exists. In wonderland, the CMP would actually respect the GPC signal and turn off the sharing choices no matter what agree button you choose, or make you set them explicitly to affirm your choices.
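
If a CMP really did integrate the signal, the reconciliation rule could be as small as this hypothetical TypeScript sketch, in which GPC = on is treated as a standing objection/withdrawal for third-party sharing that even “Accept All” cannot flip back. Purpose names are illustrative only.

    // Hypothetical CMP reconciliation rule; not taken from any real CMP or spec.
    interface CmpState {
      analytics: boolean;
      thirdPartySharing: boolean;
    }

    function applyAcceptAll(gpcOn: boolean): CmpState {
      return {
        analytics: true,
        // Treat GPC = on as a standing objection/withdrawal: "Accept All" cannot
        // switch third-party sharing back on while the signal is present.
        thirdPartySharing: !gpcOn,
      };
    }

    console.log(applyAcceptAll(true));  // { analytics: true, thirdPartySharing: false }
    console.log(applyAcceptAll(false)); // { analytics: true, thirdPartySharing: true }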

#3 Tiny windows of opportunities and leaky pipelines

The crux of the issues with consent online stems from the mess that is the adtech ecosystem, consisting of data sharing with thousands of websites, real-time bidding, and impossible demands of informed choices, all built on the backbone that is the IAB TCF ‘framework’. In this, the moment you hit Agree, a signal is sent out to all controllers along with all of the data you consented to. Let us imagine this is what really happens for a moment. You click Agree and your personal data is sent to all of the thousands of third parties mentioned in the dialogue. Now, my browser also sends a GPC signal. Who receives it?

If the GPC is used by the CMP to block data being sent to the third parties, then we’re back at the problem in #2. If all the third parties receive the GPC signal, what are they supposed to do, and will they do it? What if the third parties claim that they will respect the GPC signal, but it will take time to process and implement? That leaves a tiny window of opportunity, where that third party has the personal data and my consent to process it for their desired purpose. In this case, GPC probably only restricts continued processing.

To think further along these lines, how will I know whether a third party has actually respected my GPC signal or my consent or both or neither? There is no requirement to confirm withdrawal of consent, and since GPC is automatic, one can presume there could be an automatic signal sent back in acknowledgement. But who is keeping track, where, and how? If the IAB decides to include the GPC signal in a future update to the TCF, will it make it mandatory to check the GPC for all consent interactions (nothing else will work)? Even if the answer is yes, we are still going to be sharing data with a third party. Thus, we have leaky pipelines of data that look like they might be respecting the GPC but could actually be malicious actors or claim innocence under the guise of technical naughtiness.

#4 Which of my consents does GPC represent?

GPC is singular, i.e. there is only one GPC signal (AFAIK) sent by the browser. There is no way to associate the GPC signal with a particular consent. So will the GPC blanket-withdraw or object to everything, everywhere? What if I have given consent to A as a third party, but don’t want to give it to B? In this case, will GPC request revocation from both? I know that GPC can be indicated per website, and can be checked per website when giving consent (I think, as per the specification and the assumption that the CMP takes it into account). But then there is uncertainty as to whether my consent still applies or has been withdrawn by the GPC. Further, if controllers silently accept (or worse, ignore) the GPC – how do I keep track of what impact that automatic signal is having, and on which of my consents?

Lots of promise, Lots of worries

My nightmare is the GPC having global and wide adoption, and then being abused for loopholes all around. It is likely to happen, because, come on, look at any random website to see what we live with. So why don’t we take the time to think this through, find these weird cases, discuss them, and close them as and how we can. This blog post is a think-aloud type of draft I’ve just written for the sake of thinking about GPC. I intend to study it more, think about it in terms of the GDPR, and then perhaps update this article as I come across new information and consequences.

CfP: Human-centricity in a Sustainable Digital Economy

February 23, 2021 in Announcements

CALL FOR PAPERS

The 55th Hawaii International Conference on System Sciences Mini-track on

Human-centricity in a Sustainable Digital Economy

http://hicss.hawaii.edu | HICSS-55 | January 4 – 7, 2022 | Hyatt Regency Maui, Hawaii, USA
Paper Submission Deadline: June 15, 2021 | 11:59 pm HST

The internet and the global digital transformation have changed many different aspects of our lives. Not only economies and societies but also people’s personal lives have been influenced by this new and ever-emerging era of our history. While the digital age has made it possible to provide novel services and solutions for end-users, it has also raised serious concerns at the individual and societal levels, such as issues regarding online privacy, algorithmic bias, fairness and accountability of information systems, transparency, governance, and explainability of information systems, end-user manipulation, fake news, traceability, etc. The development of human-centric and end-user-empowering information systems can be one approach towards “digital sustainability”, since such systems enable individuals to influence how their data is used, by whom, and for which purpose. Many novel and personalized services are emerging in this direction, which make the digital economy sustainable, i.e. a space that puts human users at its centre.

This minitrack aims to attract research that advances the understanding of human-centricity and end-user empowerment in a sustainable digital economy. As the transformation is multidimensional in nature, the minitrack adopts an interdisciplinary perspective, which considers human-centricity and end-user empowerment across application domains (e.g. software development, digital commerce, healthcare, administration, mobile apps, social media, and online services) and disciplines (e.g. economics, computer science, sociology). Among the relevant topics are:

  • Characteristics and design of sustainable human-centric information systems
  • Evaluation of existing information systems from a human-centric perspective
  • Co-creation and co-production of human-centric sustainable information systems
  • Analysis and design of technologies (e.g. AI, Blockchain) that empower end-users
  • Design of human-centric end-user agents, AI and machine learning
  • Fairness, transparency, accountability and controllability of information systems
  • Legal or economic aspects of human-centricity in information systems
  • Identity, privacy and consent management systems
  • Business value of human-centric and/or user empowered solutions
  • Sociotechnical studies of human-centricity in information systems
  • Opportunities and challenges of digital behavior change, habit formation, and digital addiction
  • Digital nudging for increasing social or ecological responsibilities
  • Ethical concerns regarding human-centricity and/or sustainability
  • COVID-19’s impact on human-centricity or sustainability of information systems

Publication of Papers

HICSS is the #1 Information Systems conference in terms of citations as recorded by Google Scholar. Presented papers will be included in the Proceedings of HICSS-55. Selected papers will be invited for a fast-track in Electronic Markets – The International Journal on Networked Business.

A Special Issue on “Human-centricity in a Sustainable Digital Economy” at Electronic Markets is planned.

Important Dates

June 15, 2021 | 11:59 pm HST: Paper Submission Deadline

August 17, 2021: Notification of Acceptance/Rejection

September 22, 2021: Deadline for Authors to Submit Final Manuscript for Publication

October 1, 2021: Deadline for at least one author of each paper to register for HICSS-55

January 4 – 7, 2022: Paper Presentations

Organizers

Soheil Human, Vienna University of Economics and Business (WU Wien), Austria

Gustaf Neumann, Vienna University of Economics and Business (WU Wien), Austria

Rainer Alt, Leipzig University, Germany

About the HICSS Conference

Since 1968, the Hawaii International Conference on System Sciences (HICSS) has been known worldwide as one of the longest-standing working scientific conferences in Information Technology Management. HICSS provides a highly interactive working environment for top scholars from academia and industry from over 60 countries to exchange ideas in various areas of information, computer, and system sciences.

According to Microsoft Academic, HICSS ranks 36th in terms of citations among 4,444 conferences in all fields worldwide. The Australian Government’s Excellence in Research project (ERA) has given HICSS an “A” rating, one of only 32 Information Systems conferences so honored out of 241 (46 received a B and 146 a C rating). Data supplied by the Australian Research Council, December 2009.

Unique characteristics of the conference include:

  • A matrix structure of tracks and minitracks that enables research on a rich mixture of cutting-edge computer-based applications and technologies.
  • Three days of presentations of peer-reviewed papers and discussions in a workshop setting that promotes interaction, leading to revised and extended papers that are published in journals, books, and special issues, as well as additional research.
  • A full day of Symposia, Workshops, and Tutorials.
  • Keynote addresses and distinguished lectures which explore particularly relevant topics and concepts.
  • Best Paper Awards in each track which recognize superior research performance.
  • HICSS is the #1 IS conference in terms of citations as recorded by Google Scholar.
  • A doctoral consortium that helps participants work with senior scholars on their work-in-progress leading to journal publications.
  • HICSS panels that help shape future research directions.

Author Instructions

  • http://hicss.hawaii.edu/authors/

CfP: Special Issue on Accountability Mechanisms in Socio-Technical Systems in the Journal of Responsible Technology

February 8, 2021 in Uncategorized

There are growing demands for greater accountability of socio-technical systems. While there is broad research on a range of issues relating to accountability, such as transparency or responsibility, more concrete proposals for developing accountability mechanisms that reflect the socio-technical nature of information systems are less discussed. A key challenge in existing research is how to imagine information systems that promote accountability. While this is part of the wider debate on fairness, accountability, and transparency principles in the FAccT community, and around explainability and bias in artificial intelligence, more concrete proposals for developing socio-technical accountability mechanisms are seldom discussed in detail.

This need for accountability should be reflected both at technical levels, as well as in the socio-technical embeddedness of the systems being developed. By trying to specifically isolate the accountability mechanism within socio-technical systems, we believe it is possible to systematically identify and compare such mechanisms within different systems, as well as push for a debate about the effectiveness of such mechanisms.

This special issue focuses on mechanisms for tackling issues of accountability in socio-technical systems. The goal is to provide a forum for proposing, describing, and evaluating specific accountability mechanisms; exploring the challenges of transforming more abstract notions of accountability into practical implementations; offering critical perspectives on different accountability approaches; highlighting the successes as well as the challenges of practical use cases; and so forth.

Recognizing that the challenges are socio-technical, we solicit papers from a range of disciplines. Given the practical focus of this special issue, we specifically encourage papers that discuss accountability from a technical, organisational, legal, or STS perspective.

Potential areas of interest for submissions include, but are not limited to:

– user cognition and human behaviour in relation to the design of interfaces that promote accountability
– increasing the accountability of automated decision-making systems and decision-support systems
– ensuring accountability in public sector systems
– perspectives on accountability in the context of real-world technologies
– contributions that bring together technical and non-technical perspectives
– critical examinations of existing accountability technologies and mechanisms aimed at gaining new insights about their socio-technical characteristics and implications.

In all these and further areas, accountability in socio-technical systems needs to be addressed more systematically. The concrete implementation of such accountability mechanisms has so far received only limited attention. Similarly, the challenges arising during such transformations of abstract accountability concepts into concrete implementations as well as the critical evaluation of respective implementations are only rarely covered by existing research. We see this special issue as a way to close these gaps by engaging with the existing debate on accountability.

Submission Guidelines

CfP: https://www.journals.elsevier.com/journal-of-responsible-technology/call-for-papers/accountability-mechanisms-in-socio-technical-systems

Authors should follow the Journal guidelines for paper submission. Full details are available here: https://www.elsevier.com/journals/journal-of-responsible-technology/2666-6596/guide-for-authors

Submissions must be made through the Editorial Manager submissions system via the following link: https://www.editorialmanager.com/jrtech/default.aspx

For questions about special issue submissions or the review process, please don’t hesitate to contact the Guest Editors here: b.wagner@tudelft.nl

Relevant Dates

– Submissions open from 1 February 2021

– Submissions due by 30 June 2021

Guest Editors
– Ben Wagner, TU Delft
– Jat Singh, University of Cambridge
– Frank Pallas, TU-Berlin
– Florian Cech, TU Vienna
– Soheil Human, WU Vienna

User – Quo vadis?

January 19, 2021 in Lab updates

Marie Therese Sekwenz

“The perfect map is just an illusion” – this statement illustrates the problem that arises when mapping anything. Against this backdrop, a series of expert talks was held under the title Perceiving Time/Space through Apps: Human-centric Digital Transformation in GIS (geographic information systems). This Sustainable Computing Lab and MyData Hub Austria Meetup #6 event tried to shed light on aspects of mobility as well as usability and bias. The online meeting was attended by over 50 participants from around the world.

The recently published article by Soheil Human, Till Winkler and Ben Wagner takes a closer look at one of the most frequently used maps and its recommendation systems – Google Maps. “Technology shapes our world and behaviour”, according to Ben Wagner, who gave the first talk. The authors argue that Google’s technology uses a “one-solution-fits-all-users” approach, while the representation needed is not only user- but also context-dependent. Google Maps presents its routing recommendations in a way that inherently shows biased maps to its users, because of the different assumptions behind the compared options, instead of using visualisations that give the user a better understanding of each recommendation or option. To give an example from the aforementioned paper, the travelling time presented to the user might not take into account that the user is not already seated in the car, but rather has to walk to the vehicle. Another assumption of Google Maps is that no time is spent looking for a parking space at the end of the route. In Google’s perfect world, therefore, you will find a parking space wherever you desire it – even in the city centre. This representation can be seen as a constant nudge that slowly makes users more likely to prefer travelling by car over public transport. Since mobility-related projects are very costly, Stefanie Peer then explained the cost-benefit-analysis aspects of such questions. Here, the benefits can be understood as travel-time gains or improvements in comfort.

These benefits have to be monetized in order to calculate the “willingness of people to pay for shorter travelling times”. Stefanie Peer also stresses that this ‘objective’ travel time can only be seen as a measure and might therefore differ for every user according to their circumstances and travel needs. Robert Braun tries to answer rather philosophical questions related to automobility, such as “What can be understood under the term automobility?” or “To what degree is it just a social construct? Are we talking about Euclidean space or is it the semiotic space? Is it the representation? Is this geographical reality or is it a produced reality after all?”.

These questions around data, reality, and representation seem to be a key challenge of today. Furthermore, this discourse is political and coloured by debates on data sovereignty in Europe. While Europe has used digital infrastructure from America or China, the trend is moving towards establishing a concept known as the European Public Sphere. According to Robert Braun, this paradigm shift can also be related to the Society-of-Things, which describes not only an IoT (Internet of Things) understanding of the world but extends its principles of overall connectivity to the entire society.

Robert Braun also brought up the debate about data accessibility and data ownership. Currently, data is often stored in silos that grant only a few people access and prevent further uses, e.g. for research purposes. This is why Robert Braun wants to actively contribute to creating a data economy with the user and data creator at the centre.

Martin Semberger from the Austrian Federal Ministry for Digital and Economic Affairs (BMDW) argues that the state is an important actor in the field of data in general and mobility data in particular. He is an expert on European digital-single-market topics such as the re-use of public information, open data, and public sector information. He also stresses the importance of legally addressing the digital economy, and describes the General Data Protection Regulation (GDPR) as a “global benchmark”.

The directive on open data and the re-use of public sector information, which Martin Semberger has worked on, aims to lead the way towards an open-data society.

The directive therefore creates “Ways and means on how we can compare [options] to make better use of data”

While we live in a marginal-cost society in the digital world, Martin Semberger argues that

the general principle in Europe is that all publicly financed data should be openly available in order to boost the potential for creativity for innovation and for the economy.

The directive includes provisions covering, for example, standard licenses (with Creative Commons licenses to be given preference), transparency conditions, requests, non-discrimination clauses, etc. Through this, a more democratic playing field should be established within the European Single Market. Martin Semberger also mentions the European Data Governance Act, which takes the secure sharing of data into consideration.

The last presentation centered around sustainable automobility. Florian Daniel, the innovation manager of Carployee, described their carpooling app. Carployee can be used by companies to reduce the overall company CO2 footprint and brings together effective solutions for drivers and chauffeured users. Through gamification and HR-related incentives, Carployee tries to make carpooling flexible and comfortable. In this use case, data is used to schedule routes and nudge users towards more sustainable behavior in their mobility decisions – an example of nudging technology that promotes climate goals, in contrast to Google Maps.