How Moments of Truth change the way we think about Privacy

July 12, 2018 in Lab updates

Esther Görnemann recently presented her work at the Lab as part of the Privacy & Us doctoral consortium in London. Her work provides an important perspective on the crucial role that the individual experience of Moments of Truth plays in understanding how human beings think about privacy and under which circumstances they start actively protecting it. Here is a brief overview of her current research, along with a short introductory video.

During preliminary interview sessions, a number of internet and smartphone users told me about the surprising experience of realizing that personal information had been collected, processed, and applied without their knowledge.
In these interviews, and in countless furious online reports, users expressed concern about their devices, often stating that they felt taken by surprise, patronized, or spied upon.

Some examples:

  • In an interview, a 73-year-old man recalled searching on Google for medical treatments for prostate disorders and being immediately confronted with related advertisements on the websites he visited afterwards. Some days later, he also began to receive email spam related to his search. He said, “I felt appalled and spied upon”, and has ever since considered whether a search he is about to conduct might contain information he would rather keep to himself.
  • A Moment of Truth that made headlines in international news outlets was the story of Danielle from Portland, who in early 2018 contacted a local TV station to report that her Amazon Echo had recorded a private conversation between her and her husband and sent it to a random person from the couple’s contact list, who immediately called them back to tell them what he had received. The couple turned to Amazon’s customer service, but the company could not immediately explain the incident. Danielle told the TV station: “I felt invaded. A total privacy invasion. I’m never plugging that device in again, because I can’t trust it.” Amazon later explained that the Echo had mistakenly picked up several words from the conversation and interpreted them as a series of commands to record and send the audio; Danielle, however, still maintains that the device never prompted for any confirmation.
  • An interview participant recalled how he discovered by chance that his smartphone photo gallery was automatically synchronized with the cloud service Dropbox. He described his reaction in these words: “Dropbox automatically uploaded all my pictures in the cloud. It’s like stealing! […] Since then I’m wary. And for sure I will never use Dropbox again.”

Drawing on philosophical and sociological theories, this research project conceptualizes a Moment of Truth as an event in which the arrival of new information results in a new interpretation of reality and a fundamental change in the perceived alternatives for behavioural response.

The notion of control, or agency, is one of several influential factors that mobilize people and is key to understanding reactions to Moments of Truth.

The goal of my research is to construct a model to predict subjects’ affective and behavioural responses to Moments of Truth. A central question is why some people display an increased motivation to protest and claim their rights, convince others, adapt usage patterns and take protective measures. Currently, I am looking at the central role that the perception of illegitimate inequality and the emotional state of anger play in mobilizing people to actively protect their privacy.

https://www.youtube.com/watch?v=jkq5TukhEu4

Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping?

July 11, 2018 in Lab updates

I recently had the pleasure of attending a fantastic seminar on 10 Years of Profiling the European Citizen at the Vrije Universiteit Brussel (VUB), organised by Mireille Hildebrandt, Emre Bayamlıoğlu and their team there. Following the seminar, I was asked to develop a short provocative article to present to the scholars there. As I have received numerous requests for the article over the last few weeks, I decided to publish it here to make it accessible to a wider audience sooner rather than later. It will be published as part of an edited volume developed from the seminar with Amsterdam University Press later this year. If you have any comments, questions or suggestions, please do not hesitate to contact me: ben.wagner@wu.ac.at.

Ethics as an Escape from Regulation_2018

Workshop: Algorithmic Management: Designing systems which promote human autonomy

July 10, 2018 in Lab updates

The Privacy and Sustainable Computing Lab at Vienna University of Economics and Business and the Europa-University Viadrina are organising a 2-day workshop on:

Algorithmic Management: Designing systems which promote human autonomy
on 20-21 September 2018 at WU Vienna, Welthandelsplatz 1, 1020 Vienna, Austria

This workshop is part of a wider research project on Algorithmic Management, which studies the structural role of algorithms as forms of management in work environments where automated digital platforms such as Amazon, Uber or Clickworker manage the interaction of workers through algorithms. Assigning or reordering the sequence of tasks an individual worker is to complete is often a fully automated process. This means that algorithms may partly act like a manager who exercises control over a large number of decentralized workers. The goal of our research project is to investigate the interplay of control and autonomy in such a managerial regime, with a specific focus on the food-delivery sector.

Here is the current agenda for the workshop:

Further details about event registration and logistics can be found here: https://www.privacylab.at/event/algorithmic-management-designing-systems-which-promote-human-autonomy/ 

Council of Europe Study on Algorithms and Human Rights published

January 23, 2018 in Lab updates

After two years of negotiations in the Council of Europe Committee of Experts on Internet Intermediaries (MSI-NET), the final documents of the expert group have now been published. While the negotiations among the experts and governmental representatives in the group were not without difficulty, the final texts are relatively strong for what are still negotiated texts. Of particular interest for experts working on the regulation of algorithms and automation is the Study on Algorithms and Human Rights, which was drafted by Dr. Ben Wagner, a member of the Lab and the Rapporteur of the Study.

The study takes a broad approach to the human rights implications of algorithms, looking not just at privacy but also at freedom of assembly and expression and the right to a fair trial in the context of the European Convention on Human Rights. While the regulatory responses suggested focus on both transparency and accountability, they also acknowledge that additional standard-setting measures and ethical frameworks will be required to ensure that human rights are safeguarded in automated technical systems. Here, existing projects at the Lab such as P7000 or SPECIAL can provide an important contribution to the debate and help ensure that not just privacy but all human rights are safeguarded online.

The final version of the study is available to download here.

A GlobArt Workshop at WU’s Privacy & Sustainable Computing Lab November 10, 2017

November 15, 2017 in Lab updates

The Privacy & Sustainable Computing Lab, together with GlobArt and Capital 300, hosted a round-table discussion on artificial intelligence (AI), ubiquitous computing and the question of ethics on 9 November 2017 in Vienna. We were happy to have Jeffrey Sachs as our distinguished guest at this intense four-hour workshop on the future of AI. Other distinguished speakers included Bernhard Nessler from Johannes Kepler University Linz, who introduced the limits of AI, and Christopher Coenen, who unveiled the philosophical and historical roots of our desire to create artificial life.

The session and its speakers were structured around three main questions:

  • What can general AI really do from a technical perspective?
  • What are the historical and philosophical roots of our desire for artificial life?
  • What sorts of ethical frameworks should AI adhere to?


The speakers argued that there is a need to differentiate between AI (Artificial Intelligence) and AGI (Artificial General Intelligence): AI (like IBM Watson) needs quality training as well as quality data, plus a great deal of hardware and energy. AGI, in contrast, would be able to work with unstructured data and could consume energy more efficiently. Its other advantages are that it could react to unforeseen situations and could be applied more easily to various areas. One point stressed during the debate was that much of the terminology used in the scientific fields of AI and AGI is borrowed from neuroscience and from human intelligence proper. Since machines, as the experts confirmed, do not live up to this promise, using human-related terminology risks misleading the public as well as encouraging overly confident promises by industry.

It was discussed whether the term “processing” might be more suitable than “thinking”, at least given the current state of the art.

Popular conceptions of AI are also shaped by science fiction (Isaac Asimov, Neal Stephenson …) and by films like “Her” or “Ex Machina”, although here the terms AGI and Artificial Life should rather be differentiated.
What are the socio-cultural, historical and philosophical roots of our desire to create a general artificial intelligence and to suffuse our environments with IT systems?
“The World, the Flesh & the Devil”, a book published in 1929 by J. Desmond Bernal, was named as an inspiration for the concept of the “mechanical man”. The book in turn provided an excellent introduction to the debate about transhumanism, which often goes hand in hand with the discussion about AI. Some prominent figures in technology, such as Ray Kurzweil or Elon Musk, frequently communicate transhumanist ideas or philosophies.

What ethical guidance can we use as investors, researchers and developers, or embed in technical standards, to ensure that AI does not get out of control? On this question there was general agreement on the need for basic standards, or even regulation, of upcoming AI technology. As one example of such standards, the IEEE is working on its Ethically Aligned Design guidelines under the motto “Advancing Technology for Humanity”. Particular hope is placed in P7000 (Model Process for Addressing Ethical Concerns During System Design), which sets out to describe value-based engineering: an approach that aims to maximize value potential and minimize value harms for human beings in IT-rich environments, with human wellbeing as its ultimate goal.

In conclusion, the event provided an excellent basis for further discussions about AI and its ethics, for experts and students alike.

Speakers at the Roundtable:

  • Christopher Coenen from the Institute for System Analysis and Technology Impact Assessments in Karlsruhe
  • Peter Hampson from the University of Oxford
  • Johannes Hoff from the University of London
  • Peter Lasinger from Capital 300
  • Konstantin Oppel from Xephor Solutions
  • Michael Platzer from Mostly AI
  • Bill Price who is a Resident Economist
  • Jeffrey Sachs from Columbia University
  • Robert Trappl from the Austrian Research Institute for AI
  • Georg Franck who is Professor Emeritus for Spacial Information Systems
  • Bernhard Nessler from Johannes Kepler University
  • Sarah Spiekermann – Founder of the Privacy & Sustainable Computing Lab and Professor at WU Vienna.


Welcome

September 22, 2017 in Lab updates

Welcome to the new Privacy and Sustainable Computing Lab blog!

We look forward to having further blog posts listed here in the next few weeks, giving visitors to this website a better insight into what we’re doing. If you have questions about the Lab, please don’t hesitate to contact: ben.wagner@wu.ac.at.