Lectures4Future: Alexander Novotny on Cybersecurity and Sustainable Human Development

May 15, 2020 in Lab updates
Alexander Novotny


As the digitalization of society and the economy progresses, global cybersecurity efforts are increasing accordingly. Cybersecurity plays a dual role: it aims to protect humans from cyber threats, but at the same time it introduces new issues for sustainable human development. Key challenges include hacking aimed at undermining democratic societies, and value conflicts between cybersecurity and the protection of privacy. The “Lectures for Future” series, a lecture series with guest lecturers from multiple universities and research institutions that addresses the most pressing societal and ecological challenges, has taken up some of the sustainability challenges of cybersecurity in a lecture held by Dr. Alexander Novotny of the Sustainable Computing Lab.


Lectures4Future: Cybersecurity and Sustainable Human Development from Alexander Novotny on Vimeo.

Here is a short summary of the lecture:
It is worthwhile to take a closer look at how cybersecurity can contribute sustainably to human development. The United Nations has issued Sustainable Development Goals that provide a framework for defining the dimensions in which global society needs to develop sustainably, including “good health and well-being”, “quality education” for everyone, prospering “industry, innovation and infrastructure”, and assured “peace, justice and strong institutions”. What impact do recent developments in the cybersecurity landscape have on these dimensions? Can they foster sustainable human development, or do they pose new threats?

In the 2016 and the current U.S. presidential elections, fake news, disinformation on social media, and hacking efforts have played an increasingly crucial role in tipping public opinion. Disinformation is information that is factually false or highly misleading, which the originator believes to be false or misleading but states as if it were true, with the intention of manipulating the opinion of recipients. Botnets, troll factories, fake online groups, fake emails, and the evasion and poisoning of artificial-intelligence engines (such as those used in search engines) with fake input data have proved to be handy strategies for hackers spreading disinformation. On social media, platform operators try to identify disinformation by moderating content according to internal content policies, with mixed success. But internal policies defined by private companies are withdrawn from public and judicial review and supervision. This lack of transparency over content-moderation practices on social media forms part of the problem of dealing with disinformation. The impact of disinformation on social media may be even worse in developing countries, where the education and media competence of users tend to be lower than in developed countries.

Against these cyber threats, nation states and large institutions increasingly employ advanced cybersecurity protection systems that intelligently correlate user activity with multiple sources of information to detect threats in a timely manner. Intelligent systems for security information and event management (SIEM), cloud access security brokers (CASB), intelligent traffic-inspecting next-generation firewalls, and identity and access management (IAM) systems fall into this category. Advanced cybersecurity protection systems may sustainably help to reduce cybercrime and to protect the control systems steering critical infrastructure, such as those providing us with clean drinking water, an uninterrupted supply of electricity, and functioning medical equipment in hospitals. But the analysis of user behavior and the inspection of online traffic by automated systems also come with the risk of increased surveillance and a reduced level of privacy.

Key to protecting the privacy of users in advanced cybersecurity protection systems is a technology called tokenization: the true identity of a user is exchanged for a random token number, and the mapping of the token to the user is securely kept by a so-called trusted party, thereby providing pseudonymity. As such, only the trusted record-keeper is able to re-identify a user and his or her behavior. But whom can we trust to keep those records? State actors or private companies, both of whom have a strong interest in applying intelligent cybersecurity protection systems? With these technologies in place, Internet users may have a constant feeling of being surveilled, remarkably similar to Jeremy Bentham’s panopticon, thus threatening their well-being in the long run. Guaranteeing true anonymity instead of reversible pseudonymity could be a more sustainable way of dealing with this type of protection.
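As a rough illustration, the tokenization scheme described above can be sketched in a few lines of Python. The class name, identifiers, and the example e-mail address are invented for illustration; a real deployment would add access controls, auditing, and encrypted storage at the trusted party.

```python
import secrets

class TrustedTokenVault:
    """Hypothetical trusted party that keeps the token-to-user mapping."""

    def __init__(self):
        self._token_to_user = {}
        self._user_to_token = {}

    def tokenize(self, user_id: str) -> str:
        """Exchange a true identity for a random token (pseudonym)."""
        # Reuse the existing token so one user always maps to one pseudonym.
        if user_id in self._user_to_token:
            return self._user_to_token[user_id]
        token = secrets.token_hex(16)  # random, reveals nothing about the user
        self._token_to_user[token] = user_id
        self._user_to_token[user_id] = token
        return token

    def reidentify(self, token: str) -> str:
        """Reverse the pseudonym; only the trusted record-keeper can do this."""
        return self._token_to_user[token]

vault = TrustedTokenVault()
token = vault.tokenize("alice@example.com")

# A monitoring system would log events under the token, never the real
# identity; re-identification requires asking the trusted party.
assert token != "alice@example.com"
assert vault.reidentify(token) == "alice@example.com"
```

The sketch makes the trust problem concrete: whoever operates the vault can reverse every pseudonym, which is exactly why true anonymity is discussed as the more sustainable alternative.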

Cybersecurity can be a powerful enabler of digitalization and support sustainable human development. If we fail to consider and reconcile cybersecurity with human values such as well-being, truth and privacy, we run the risk of undermining sustainable development goals, including strong democratic institutions, quality education for everyone, and a prospering, innovative industry.

The full lecture can be watched here:
https://vimeo.com/413215168

RESPECTeD Project: Really Enforceable Solution to Protect End-users Consent & Tracking Decisions

December 2, 2019 in Lab updates, Projects

 

We are delighted to inform you that the Privacy and Sustainable Computing Lab, in collaboration with the Institute of Information Systems and Society (WU Wien), the Institute of Information Systems and New Media (WU Wien), and NOYB – European Center for Digital Rights (noyb.eu), has started the interdisciplinary research project RESPECTeD. The full title of the project is “Really Enforceable Solution to Protect End-users Consent & Tracking Decisions”. Soheil Human and Ben Wagner are the project leads representing WU Wien, and Max Schrems is the project lead on the NOYB side. The project is funded by the NetIdee programme of the Internet Privatstiftung Austria (IPA).


ACCOUNTABLE-MOBILITY Project: Towards Realization of Accountable Multi-modal Smart Mobility in Vienna

November 13, 2019 in Lab updates, Projects

We are delighted to inform you that the Privacy and Sustainable Computing Lab in collaboration with the Institute of Information Systems and Society (WU Wien), the Institute of Information Systems and New Media (WU Wien), and the Institute for Multi-Level Governance and Development (WU Wien) has started the ACCOUNTABLE-MOBILITY project. The full title of the project is “Towards Realization of Accountable Multi-modal Smart Mobility in Vienna: Do Smartphone Apps Influence Mode Choice Behavior among Viennese Citizens? The Role of User-interface Design in Influencing Users’ Mobility Behaviour in Vienna”. Soheil Human, Stefanie Peer, Ben Wagner and Till Winkler will collaborate on this project, which is funded by the City of Vienna WU Jubilee Fund.

Project Background

Mobility is one of the central focus areas of the Smart City Wien framework strategy, which aims to provide “[t]he best quality of life for all inhabitants of Vienna, while minimising the consumption of resources. This will be realized through comprehensive innovation” by 2050. The interdisciplinary state of the art in accountable information systems, transport economics, human-computer interaction (HCI), and related fields suggests that emerging smart approaches to mobility are becoming increasingly personalised, and multi-modal mobility Apps (such as Google Maps, Qando Wien, WienMobil, etc.) are more and more involved in supporting individuals in choosing their mode of transportation for everyday in-city travel. However, little is known about the extent to which these Apps actually influence mode choices in Vienna. If we assume that the Apps do influence the mobility behavior of Viennese citizens, it is crucial that the information provided in the Apps (especially regarding travel times) is unbiased. However, our pre-studies have indicated that some mobility Apps (such as Google Maps) are sometimes biased in favour of private motorised vehicles (personal cars, or car-sharing) and are hence inconsistent with the sustainability goals of the City of Vienna. In other words, there is no doubt that such Apps could play a fundamental supportive role in the emerging transport economy of Vienna; however, they could also mislead individuals in their mobility decision-making if they are not accountable and value-based regarding how they calculate different travel attributes and how (and what) they show to users.

Project Goals

In this project, we conduct a representative survey to understand to which extent (and for which trips) Viennese citizens use different mobility Apps, and to which extent such Apps influence the mobility behaviour of people in Vienna. Moreover, we aim to develop a roadmap for the future development of research and policy on digital mapping technologies in Vienna. This roadmap will build on extensive literature research, exploratory HCI approaches and the representative study of App usage in Vienna, while also integrating the Smart City and wider transportation public-policy priorities of the City of Vienna. The roadmap will form the foundation of a long-term collaboration between WU and the City of Vienna regarding App-based mobility policy, and will be finalised in a joint workshop between WU and the City of Vienna, providing clear paths for joint collaboration and future research in this area.

Professional Training at the Austrian Data Protection Authority

June 6, 2019 in Lab updates

PhD candidate Esther Görnemann held an advanced training session for the Austrian Data Protection Authority on the market, functioning, and users’ privacy concerns regarding Amazon Alexa. The training documentation was shared with the European Data Protection Board and is now available on the Privacy Lab website.

The presentation covers a broad spectrum of issues around the technology behind voice assistants. It gives a comprehensive yet accessible overview of how wake-word detection, speech recognition, language understanding and language generation work. The author conducted a qualitative research study focusing on the interactions between users and the Amazon Alexa voice assistant and uncovered a number of specific privacy and security concerns users expressed. In the presentation, these user concerns are addressed one by one with the goal of identifying how legitimate and realistic they are. To this end, published research, hacking attempts, submitted patents, public statements, media coverage and user experiences are analyzed and combined.

Core findings

  • The wake-word detection module balances precision against latency, causing false positives: the wake word is often recognized although it was not said by the user.
  • Recent research points to discrepancies between transmitted recordings and the recordings accessibly stored in the user’s profile.
  • Recent patent submissions suggest that voice recordings can be used to infer detailed and intimate knowledge about the user, especially in combination with other available information.
  • Privacy policies, product information and terms of use are formulated vaguely and do not provide exhaustive information.
  • The lack of access control and user authentication causes specific privacy and security issues.
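The precision/latency trade-off behind the first finding can be illustrated with a toy detector. The scores and thresholds below are invented for illustration and do not reflect any vendor’s actual algorithm.

```python
# Toy sketch: a wake-word detector scores each audio frame; a lower
# acceptance threshold lets the device react sooner (lower latency)
# but admits more false positives.

def detect_wake_word(frame_scores, threshold):
    """Return the indices of audio frames where the detector would trigger."""
    return [i for i, score in enumerate(frame_scores) if score >= threshold]

# Hypothetical per-frame confidence scores; only frame 2 is a real wake word.
scores = [0.2, 0.55, 0.9, 0.6, 0.1]

strict = detect_wake_word(scores, threshold=0.8)   # high precision, slower to trigger
lenient = detect_wake_word(scores, threshold=0.5)  # lower latency, more misfires

assert strict == [2]
assert lenient == [1, 2, 3]  # frames 1 and 3 are false positives
```

Tuning the threshold down improves responsiveness at the cost of recordings being triggered by speech that never contained the wake word, which is exactly the behaviour users reported.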

Amazon Alexa – Market, Functioning, user’s privacy concerns

About the author

PhD candidate Esther Görnemann works at the Institute of Information Systems and Society at Vienna University of Economics and Business, supervised by Prof. Sarah Spiekermann-Hoff.

The documentation has been developed for a professional training held April 24, 2019 at the Austrian Data Protection Authority in Vienna.

As part of the ITN Privacy&Us, this research project received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 675730.

New Book by Prof. Sarah Spiekermann-Hoff: Digitale Ethik. Ein Wertesystem für das 21. Jahrhundert

April 30, 2019 in Lab updates

Prof. Dr. Sarah Spiekermann-Hoff presented her new book “Digitale Ethik. Ein Wertesystem für das 21. Jahrhundert” (“Digital Ethics. A Value System for the 21st Century”), published by Droemer, on April 1, 2019 in Vienna.

The author asks how humane progress can be created and maintained in a digital society in times of Big Data and Artificial Intelligence. The book takes into account the individual, economic and technical prerequisites for a future that perceives technology as an enabling and strengthening, rather than a weakening, influence. She writes about the values that are important to consider in order to support good innovations. Thinking ethically enables innovation. Digitalization has its own dynamic, which we as a society can direct towards true progress.

Prof. Spiekermann-Hoff describes her journey from working in IT to academia, providing insights about the “Weltgestalter” in Silicon Valley and critically reflecting on the benefits and consequences of technology.

In 1996, she started out at a Silicon Valley company that manufactured hardware for computer network technology. In the New Economy years, the developing Web 1.0 was seen as a tool for a peaceful society, providing a myriad of opportunities. Intrigued by the new technologies, she worked in the telecommunications sector for a couple of years. Subsequently, she received her PhD in Berlin and wanted to start a post-doc in Berkeley, but her research question, privacy in Artificial Intelligence, was not opportune after the terror attacks of 9/11. Thus, she returned to Silicon Valley to work for a technology start-up at the epicenter of smartphone development. She soon realized that companies mainly followed hypes and had no place for values like privacy or truth. To align her work with her own values, she moved into science.

The values guiding progress and digitalization are still at the center of her work, and she tries to convey them to her students at the Vienna University of Economics and Business. In one of her classes, students have to develop a product roadmap that serves to advance the technical development of a company: they have to come up with possibilities to enhance a company running a food delivery service. In this context, the students develop many creative ideas, such as optimizing the route to the client with artificial intelligence or using human-like computer voice software to make customers feel more at ease. The author, however, proposes a different way to think about the issues of digitalization: bicycle couriers have a very difficult job, and having an app as their boss does not make it any better. She therefore proposes to take their values into account and to realize those values by means of digital technologies. Additionally, employees and their priorities should be included, as well as the maxims and values maintained in a humane and fair society.

At the center of the book is the search for an answer to the question of how real progress can be fostered in a sustainable and good way, bringing society forward. The book provides insights into how values can play a bigger role in a digitalized society, emphasizing reflection upon them in our choices and actions in a digital world.

The book is now available on Amazon:

https://www.amazon.de/Digitale-Ethik-Ein-Wertesystem-Jahrhundert/dp/3426277360

and in any book store.

Ben Wagner on Challenging Online Hate with the Power of Community

November 6, 2018 in Lab updates

Despite considerable efforts, hate speech remains a highly present online phenomenon. This is in no small part because hate speech is so difficult to identify. Hate speech is an intersubjective construct, which makes it difficult to scientifically capture and measure. From both legal and societal perspectives, understandings of what constitutes hate speech differ widely. Yet the impacts of hate speech are very real. It has spread throughout public discourse and has real-world consequences for the people affected by it. To say that it is hard to measure, and that therefore nothing can be done about it, is to abdicate responsibility for a deeply problematic societal phenomenon and to surrender public space to hatred.

In order to respond to this challenge, the Privacy and Sustainable Computing Lab at the Vienna University of Economics and Business has partnered with der STANDARD, an Austrian newspaper with one of the largest German-language online communities in the world. In the coming months we will be working closely with der STANDARD to develop a design-based approach that changes the way the forum works in order to reduce the amount of hate speech in the forums. These design changes focus on strengthening the power of community within the der STANDARD forums and will be developed in close collaboration with the forum users themselves.

While we do not believe that this project, or indeed any other technical system, can ‘solve’ or ‘fix’ hate speech, we hope that it can make hate speech appear less frequently on the der STANDARD forums. As there are considerable difficulties in measuring hate speech, we intend to measure different legal, societal and practical aspects of it, while acknowledging that these proxies for hate speech may differ. We also hope that this design-based approach will reduce the reliance on filtering techniques, which currently constitute one of the main responses to hate speech and are far from ideal: not only do they frequently catch the wrong types of content, they are also often not very effective in preventing the appearance of hate speech more broadly.

At a time when numerous newspapers have decided to shut down their discussion forums, we believe that this project can contribute to strengthening the public sphere online. If we want to prevent this public space from shrinking further, we need better responses to hate speech than content removal. We believe that a design-based approach can contribute to reducing the prevalence of hate speech online by strengthening the power of community.

Soheil Human on EXPEDiTE: Human-centred Personal Data Ecosystems Made Possible

September 5, 2018 in Lab updates
Soheil Human

We are delighted to inform you that the Privacy and Sustainable Computing Lab, in collaboration with the Institute of Information Systems and New Media (WU Wien), has started the interdisciplinary research project EXPEDiTE. The full title of the project is “EXPloring opportunities and challenges for Emerging personal DaTa Ecosystems: Empowering humans in the age of the GDPR – A Roadmap for Austria“. The Linked Data Lab of the Vienna University of Technology and the NGO OwnYourData are our partners in this project, which is funded by the Austrian Research Promotion Agency (FFG). Soheil Human and Dr. Ben Wagner are the project leads for EXPEDiTE. Prof. Axel Polleres, Prof. Sarah Spiekermann, and Prof. Gustaf Neumann form the advisory board of the project. Here is a short description of the project:

In our increasingly digital societies, data is perceived as a key resource for economic growth. Among the highest-valued corporations today, many have business models that are essentially based on data of or about their users. This development has raised serious concerns about individuals’ right to privacy and their ability to exercise control over their own data, as well as about the broader shifts of power between data subjects and data controllers that this development entails. Consequently, European policy makers have passed the General Data Protection Regulation (GDPR), which has imposed stricter regulations on personal data handling practices within the EU.

In parallel, a movement, often associated with the term “MyData”, has emerged in civil society with the goal of putting individuals in control of their personal data. One of the major adoption barriers for such platforms, however, has been the difficulty individuals face in acquiring their personal data from data controllers. The GDPR, which came into effect on May 25, 2018, requires data controllers to provide individuals access to all data they hold about them, as well as to facilitate “portability” of that data. These new rights under the GDPR could drive the emergence of “human-centred” personal data ecosystems, in which individuals’ roles are no longer limited to that of passive “data subjects”, but in which they become active stakeholders that have access to, exercise control over, and create value from their personal data.

However, although the provisions of the GDPR align closely with this vision, it is still largely unclear how they will be implemented by data controllers and whether and how citizens will exercise their digital rights in practice. In the EXPEDiTE project, we will investigate these timely questions, and more broadly explore the challenges, barriers and drivers for the emergence of human-centred personal data ecosystems. Furthermore, we will investigate how individuals, once they are able to acquire their personal data, can integrate and analyze it. To this end, we will explore the concept of context-rich personal knowledge graphs that “liberate” data from closed environments, link and enrich it with other available (e.g., open) data, and create insights and value from individuals’ previously dispersed data. Additionally, such graphs have the potential to facilitate innovative products and services without locking individuals into proprietary environments. In this project, we will tackle key technical challenges involved in realizing this vision, including technical interfaces, syntactic and semantic interoperability, and mechanisms that allow users to share data, exercise control over its utilization, and automatically manage consent for the use of their data. The project results will feed into a comprehensive roadmap that will assess the current state of personal data ecosystems in Austria and in the broader international context, synthesize the major challenges and opportunities they face, and outline a path towards human-centred personal data ecosystems.
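As a rough sketch of the personal-knowledge-graph idea described above, “liberated” data can be represented as simple subject-predicate-object triples. All controller names and data below are invented for illustration; a real system would use an RDF store and stable identifiers rather than plain strings.

```python
# Minimal sketch of a personal knowledge graph as subject-predicate-object
# triples, integrating data exported from several (hypothetical) controllers.

graph = set()

def add_triple(subject, predicate, obj):
    graph.add((subject, predicate, obj))

# Data "liberated" from two hypothetical controllers via GDPR data portability:
add_triple("me", "purchased", "running shoes")   # e.g. a web-shop export
add_triple("me", "ran", "5 km on 2018-09-01")    # e.g. a fitness-tracker export

# Enrichment with other available (e.g. open) data:
add_triple("running shoes", "category", "sports equipment")

def query(subject=None, predicate=None):
    """Return all triples matching the given subject and/or predicate."""
    return [t for t in sorted(graph)
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)]

# Previously dispersed data can now be queried in one place:
assert len(query(subject="me")) == 2
assert query(predicate="category") == [("running shoes", "category", "sports equipment")]
```

The point of the sketch is the linking step: once exports from different controllers share a common triple representation, they can be enriched and queried together without handing the data back to a proprietary platform.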

The main aims of EXPEDiTE can be summarised as follows:

  • This interdisciplinary research project aims to resolve key tensions between providing personalized services and the right to privacy, by envisioning new human-centric personal data ecosystems (PDEs) in which personal data can only be collected and processed under the control of the data subject.
  • While the right to privacy is not new, the European General Data Protection Regulation (GDPR) will considerably contribute to the implementation of this right and provides new opportunities to study how the right to privacy can be realized in practice through data portability or the right to access and modify personal data.
  • In this project, we will analyze how the GDPR is perceived and implemented in practice, what key barriers hinder its implementation, and how implementation can be improved by changing the socio-technical configuration of actors.
  • We will use an interdisciplinary approach to develop a roadmap towards human-centric PDEs in Austria, which describes the current state of personal data processing in Austria, conceptualizes technical requirements of human-centric PDEs, develops the concept of personal knowledge graphs, discusses barriers and challenges ahead of human-centric PDEs, and envisions technological, social, and economic opportunities that human-centric PDEs will bring.
  • Finally, our project will connect Austria – for the first time – to the global MyData movement which aims to empower humans with their own personal data – enabling novel business models, innovative technologies, and R&D projects that not only benefit citizens within and beyond the borders of Austria, but also foster economic growth while respecting the right to privacy.

How Moments of Truth change the way we think about Privacy

July 12, 2018 in Lab updates

Esther Görnemann recently presented her work at the Lab as part of the Privacy & Us doctoral consortium in London. Her work provides an important perspective on the crucial role that the individual experience of Moments of Truth plays in understanding how human beings think about privacy and under which circumstances they start actively protecting it. Here is a brief overview of her current research, as well as a short introductory video.

During preliminary interview sessions, a number of internet and smartphone users talked to me about the surprising experience of realizing that personal information had been collected, processed and applied without their knowledge.
In these interviews and in countless furious online reports, users expressed concern about their devices, often stating that they felt taken by surprise, patronized or spied upon.

Some examples:

  • In an interview, a 73-year-old man recalled that he had searched on Google for medical treatment of prostate disorders and was immediately confronted with related advertisements on the websites he visited subsequently. Some days later, he also started to receive email spam related to his search. He said, “I felt appalled and spied upon”, and ever since has considered whether a search he is about to conduct might contain information he would rather keep to himself.
  • A Moment of Truth that made headlines in international news outlets was the story of Danielle from Portland, who in early 2018 contacted a local TV station and reported that her Amazon Echo had recorded a private conversation between her and her husband and sent it to a random person from the couple’s contact list, who immediately called the couple back to tell them what he had received. The couple turned to Amazon’s customer service, but the company was not immediately able to explain the incident. When she called the TV station, Danielle expressed her feelings: “I felt invaded. A total privacy invasion. I’m never plugging that device in again, because I can’t trust it.” While Amazon later explained the incident, saying the Echo had mistakenly picked up several words from the conversation and interpreted them as a series of commands to record and send the audio, Danielle still maintains that the device never prompted any confirmation or question.
  • An interview participant recalled how he coincidentally discovered that his smartphone photo gallery was automatically synchronized with the cloud service Dropbox. He described his reaction with the words: “Dropbox automatically uploaded all my pictures in the cloud. It’s like stealing! […] Since then I’m wary. And for sure I will never use Dropbox again.”

Drawing from philosophical and sociological theories, this research project conceptualizes Moments of Truth as the event in which the arrival of new information results in a new interpretation of reality and a fundamental change of perceived alternatives of behavioural responses.

The notion of control or agency is one of several influential factors that mobilize people, and it is key to understanding reactions to Moments of Truth.

The goal of my research is to construct a model to predict subjects’ affective and behavioural responses to Moments of Truth. A central question is why some people display an increased motivation to protest and claim their rights, convince others, adapt usage patterns and take protective measures. Currently, I am looking at the central role that the perception of illegitimate inequality and the emotional state of anger play in mobilizing people to actively protect their privacy.

https://www.youtube.com/watch?v=jkq5TukhEu4

Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping?

July 11, 2018 in Lab updates

I recently had the pleasure of attending a fantastic seminar on 10 Years of Profiling the European Citizen at Vrije Universiteit Brussel (VUB), which was organised by Mireille Hildebrandt, Emre Bayamlıoğlu and their team there. Following the seminar, I was asked to develop a short provocative article to present to the scholars there. As there have been numerous requests for the article over the last few weeks, I decided to publish it here to ensure that it is accessible to a wider audience sooner rather than later. It will be published as part of an edited volume developed from the seminar with Amsterdam University Press later this year. If you have any comments, questions or suggestions, please do not hesitate to contact me: ben.wagner@wu.ac.at.

Ethics as an Escape from Regulation_2018

Workshop: Algorithmic Management: Designing systems which promote human autonomy

July 10, 2018 in Lab updates

The Privacy and Sustainable Computing Lab at Vienna University of Economics and Business and the Europa-University Viadrina are organising a 2-day workshop on:

Algorithmic Management: Designing systems which promote human autonomy
on 20-21 September 2018 at WU Vienna, Welthandelsplatz 1, 1020 Vienna, Austria

This workshop is part of a wider research project on Algorithmic Management, which studies the structural role of algorithms as forms of management in work environments where automated digital platforms, such as Amazon, Uber or Clickworker, manage the interaction of workers through algorithms. The process of assigning or changing a sequence of individual tasks to be completed is often fully automated. This means that algorithms may partly act like a manager who exercises control over a large number of decentralized workers. The goal of our research project is to investigate the interplay of control and autonomy in such a managerial regime, with a specific focus on the food-delivery sector.

Here is the current agenda for the workshop:

Further details about event registration and logistics can be found here: https://www.privacylab.at/event/algorithmic-management-designing-systems-which-promote-human-autonomy/