Tuesday, July 16, 2019

Nudge Technology

People are becoming aware of the ways in which AI and big data can be used to influence behaviour, in accordance with Nudge Theory. Individuals can be nudged to behave in particular ways, and large-scale social systems (including elections and markets) can apparently be manipulated. In other posts, I have talked about the general ethics of nudging systems. This post will concentrate on the technological aspects.

Technologically mediated nudges are delivered by a sociotechnical system we could call a Nudge System. This system might contain several different algorithms and other components, and may even have a human in the loop. Our primary concern here is with the system as a whole.

As an example, I am going to consider a digital advertisement in a public place, which shows anti-smoking messages whenever it detects tobacco smoke.

Typically, a nudge system would perform several related activities. (A minimal code sketch of the whole pipeline follows this list.)

1. There would be some mechanism for "reading" the situation - for example, detecting the events that might trigger a nudge, as well as determining the context. This might be a simple sense-and-respond mechanism, or it might include some more sophisticated analysis, using some kind of model. There is typically an element of surveillance here. In our example, let us imagine that the system is able to distinguish different brands of cigarette, and to determine how many people are smoking in its vicinity.

2. Assuming that there is some variety in the nudges produced by the system, there would be a mechanism for selecting or constructing a specific nudge, using a set of predefined nudges or nudge templates - for example, different anti-smoking messages for the expensive brands versus the cheap brands. By combining this with other personal data, the system might even be able to name and shame individual smokers.

3. There would then be a mechanism for delivering or otherwise executing the nudge, which might be private (to a person's phone) or public (via a display board). We might call this the nudge agent. In some cases, the nudge may be delivered by a human, but prompted by an intelligent system. If the nudge is publicly visible, other people may be able to infer the circumstances leading to the nudge - a potential breach of privacy. (For example, letting your friends and family know that you were having a sneaky cigarette, when you had told them that you had given up.)

4. In some cases, there might be a direct feedback loop, giving the system immediate data about the human response to the nudge. Obviously this will not always be possible. Nevertheless, we would expect the system to retain a record of the delivered nudges, for future analysis. To support multiple feedback tempos (as discussed in my work on Organizational Intelligence), there could be multiple components performing the feedback and learning function. Typically, the faster loops would be completely automated (autonomic), while the slower loops would have some human interaction.
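To make this concrete, here is a minimal Python sketch of the four activities wired together. Everything in it is hypothetical - the sensor and display interfaces, the template table and the field names are all invented for illustration - and a real Nudge System would of course need far more machinery (and far more safeguards) at every step.

```python
import datetime

# Hypothetical stand-ins for real hardware, just so the sketch runs.
class FakeSmokeSensor:
    def sample(self):
        # A real sensor would return live readings; these are invented.
        return {"smoke": True, "brand": "cheap", "people": 3}

class FakeDisplayBoard:
    def show(self, message):
        print(f"[BOARD] {message}")

# Predefined nudge templates (activity 2). Invented wording.
NUDGE_TEMPLATES = {
    "expensive": "That brand costs you thousands a year. Quitting is free.",
    "cheap": "Cheap cigarettes carry the same risks. Support is available.",
    "default": "Most smokers want to quit. Today could be the day.",
}

nudge_log = []  # record of delivered nudges, for the slower feedback loops

def read_situation(sensor):
    """Activity 1: sense-and-respond, possibly with further analysis."""
    reading = sensor.sample()
    return reading if reading.get("smoke") else None

def select_nudge(context):
    """Activity 2: select a nudge from the predefined templates."""
    return NUDGE_TEMPLATES.get(context.get("brand"), NUDGE_TEMPLATES["default"])

def deliver_nudge(display, message):
    """Activity 3: the nudge agent executes the nudge (publicly, here)."""
    display.show(message)

def record_nudge(context, message, response=None):
    """Activity 4: log the nudge; any immediate feedback goes in response."""
    nudge_log.append({
        "time": datetime.datetime.now(),
        "context": context,
        "message": message,
        "response": response,  # often unavailable at delivery time
    })

def nudge_cycle(sensor, display):
    context = read_situation(sensor)
    if context:
        message = select_nudge(context)
        deliver_nudge(display, message)
        record_nudge(context, message)

nudge_cycle(FakeSmokeSensor(), FakeDisplayBoard())
```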

There would typically be algorithms to support each of these activities, possibly based on some form of Machine Learning, and there is the potential for algorithmic bias at several points in the design of the system, as well as for various forms of inaccuracy (for example, false positives, where the system incorrectly detects tobacco smoke). More information doesn't always mean better information: someone might design a sensor that estimates the height of each smoker, in order to detect underage smokers, but this obviously introduces new possibilities of error.
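To see why false positives matter, here is some back-of-envelope Bayesian arithmetic, using invented numbers. Even a detector that looks accurate in isolation can deliver a large share of its nudges to people who are not smoking at all, if genuine smoking events are relatively rare.

```python
# Back-of-envelope arithmetic with invented numbers: the base-rate effect.
# Suppose the detector fires on 95% of genuine smoking events (sensitivity),
# fires on 2% of smoke-free intervals (false positive rate), and genuine
# smoking occurs in 5% of the intervals the system samples.

sensitivity = 0.95          # P(alarm | smoking)
false_positive_rate = 0.02  # P(alarm | no smoking)
base_rate = 0.05            # P(smoking)

p_alarm = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
precision = sensitivity * base_rate / p_alarm  # P(smoking | alarm), by Bayes

print(f"P(alarm) = {p_alarm:.4f}")              # 0.0665
print(f"P(smoking | alarm) = {precision:.2f}")  # about 0.71
# So nearly 3 in 10 nudges would be delivered to people who are not smoking.
```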

In many cases, there will be a separation between the technology engineers who build systems and components, and the social engineers who use these systems and components to produce some commercial or social effects. This raises two different ethical questions.

Firstly, what does responsible use of nudge technology look like - in other words, what are the acceptable ways in which nudge technology can be deployed? What purposes and what kinds of content are acceptable, and what testing and continuous monitoring are needed to detect any signals of harm or bias? Should the nudge be private to the nudgee, or could it be visible to others? What technical and organizational controls should be in place before the nudge technology is switched on?

And secondly, what does responsible nudge technology look like - in other words, technology that can be used safely and reliably, with reasonable levels of transparency and user control?

We may note that nudge technologies can be exploited by third parties with commercial or political intent. For example, there are constant attempts to trick or subvert the search and recommendation algorithms used by the large platforms, and Alex Hern recently reported on Google's ongoing battle to combat misinformation and the promotion of extreme content. So one of the requirements of responsible nudge technology is being Robust Against Manipulation.

We may also note that any bias may either be inherent in the design of the nudge technology itself, or may be introduced by the users of the nudge technology when customizing it for a specific purpose. For example, nudges may be deliberately worded as "dog whistles" - designed to have a strong effect on some subjects while being ignored by others - and this can produce significant and possibly unethical bias in the working of the system. In that case, the bias is not in the algorithm but in the nudge templates; and since there may be other ways in which bias is relevant to nudging in general, the question of algorithmic bias is not the whole story.
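Here is a small illustration of that point, again with invented content. The selection algorithm below is entirely neutral - a plain lookup - so an audit of the code alone would find nothing wrong; the bias sits in the template data supplied when the technology was customized.

```python
# Hypothetical illustration: a neutral selection algorithm, biased data.

def select_nudge(segment, templates):
    """A plain lookup - nothing in this code favours any group."""
    return templates.get(segment, templates["default"])

# Templates as a customizing user might (unethically) supply them.
# The charged wording targets one segment; the other gets no real nudge.
biased_templates = {
    "segment_a": "People like you never manage to quit. Prove us wrong.",
    "segment_b": "Have a nice day.",  # effectively no nudge at all
    "default": "Smoking harms you and those around you.",
}

for segment in ("segment_a", "segment_b"):
    print(segment, "->", select_nudge(segment, biased_templates))

# An audit of select_nudge alone would find nothing wrong; the bias
# only appears when the template data and its effects are examined.
```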



Alex Hern, Google tweaked algorithm after rise in US shootings (Guardian, 2 July 2019)

Wikipedia: Nudge Theory

Related posts: Organizational Intelligence in the Control Room (October 2010), On the Ethics of Technologically Mediated Nudge (May 2019), The Nudge as a Speech Act (May 2019), Algorithms and Governmentality (July 2019), Robust Against Manipulation (July 2019)

