Tuesday, April 11, 2023

Chatbotics - Coercion of the Senses

In a recent talk at CRASSH Cambridge, Marcus Tomalin described the inner workings of ChatGPT and similar large language models (LLMs) as advanced matrix algebra, and asked whether we could really regard these systems as manifesting empathy. A controversial 2021 paper (which among other things resulted in Timnit Gebru's departure from Google) characterized large language models as stochastic parrots. Tomalin suggested we could also regard them as stochastic psychopaths, given the ability of (human) psychopaths to manipulate people. While psychopaths are generally thought to lack the kind of affective empathy that other humans possess, they are sometimes described as possessing cold empathy or dark empathy, which enables them to control other people's emotions.

If we want to consider whether an algorithm can display empathy, we could ask the same question about other constructed entities, including organizations. Let's start with so-called empathetic marketing. Tomalin's example was the L'Oréal slogan "because you're worth it".

If some instances of marketing are described in terms of "empathy", where is the empathy supposed to be located? In the case of the L'Oréal slogan, the relevant affect may be situated not just in the consumer but also in individuals working for the company. The copywriter who created the slogan in 1971 was Ilon Specht. Many years later she told Malcolm Gladwell, "It was very personal. I can recite to you the whole commercial, because I was so angry when I wrote it." Gladwell quoted a friend of hers as saying Ilon "had a degree of neurosis that made her very interesting".

And then there is Joanne Dusseau, the model who first spoke the words.

“I took the tag line seriously,” she says. “I felt it all those thousands of times I said it. I never took it for granted. Over time, it changed me for the better.” (Vogue)

So if this is what it takes to produce and sustain one of the most effective and long-lasting marketing messages, what affective forces can large language models assemble? Or to put it another way, how might empathy emerge?

Another area where algorithmic empathy needs careful consideration is mental health. There are many apps that claim to provide help to people with mental health issues. If these apps appear to display any kind of empathy with the user, this might increase the user's willingness to accept any guidance or nudge. (In a psychotherapeutic context, this could be framed in terms of transference, with the algorithm playing the role of the "subject supposed to know".) Over the longer term, it might result in over-reliance or dependency.

One of the earliest recorded examples of a person confiding in a pseudo-therapeutic machine was when Joseph Weizenbaum's secretary was caught talking to ELIZA. Katherine Hayles offers an interesting interpretation of this incident, suggesting that ELIZA might have seemed to provide the dispassionate and non-judgemental persona that human therapists take years of training to develop.

I did some work a few years ago on technology ethics in relation to nudging. This was largely focused on the actions that the nudge might encourage. I need to go back and look at this topic in terms of empathy and affect. Watch this space.

 


Emily Bender et al, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021 Pages 610–623)

Malcolm Gladwell, True Colors: Hair dye and the hidden history of postwar America (New Yorker, 22 March 1999)

N Katherine Hayles, Trauma of Code (Critical Inquiry, Vol. 33, No. 1, Autumn 2006, pp. 136-157)

Naomi Pike, As L’OrĂ©al Paris’ Famed Tagline “Because You’re Worth It” Turns 50, The Message Proves As Poignant As Ever (Vogue, 8 March 2021)

Marcus Tomalin, Artificial Intelligence: Can Systems Like ChatGPT Automate Empathy (CRASSH Cambridge, 31 March 2023) 

Related posts: Towards Chatbot Ethics (May 2019), The Sad Reality of Chatbotics (December 2021), From ChatGPT to infinite sets (May 2023)

Thursday, December 23, 2021

The Sad Reality of Chatbotics

As I noted in my previous post on chatbotics, Towards Chatbot Ethics (May 2019), the chatbot has sometimes been pitched as a kind of Holy Grail. Which prompts the question I discussed before - whom shall the chatbot serve?

Chatbots are designed to serve their master - and this is generally the organization that runs them, not necessarily the consumer, even if you have paid good money to have one of these curious cylinders in your home. For example, Amazon's Alexa is supposed to encourage consumers to access other Amazon services, including retail and entertainment -  and this is how Amazon expects to make a financial return on the sale and distribution of these devices.

But how well do these devices work, even for their masters? The journalist Priya Anand (who tweets at @PriyasIdeas) has been following this question for a while. Back in 2018, she talked to digital marketing experts who warned that voice shopping was unlikely to take off quickly. Her latest article notes the attempts by Amazon Alexa to nudge consumers into shopping, which may simply cause some frustrated consumers to switch the thing off altogether. Does this explain the apparently high attrition rates?

If you are selling a device at a profit, it may not matter if people don't use it much. But if you are selling a device at an initial loss, expecting to recoup the money when the device is used, then you have to find ways of getting people to use the thing. 

Perhaps if Amazon can use its Machine Learning chops to guess what we want before we've even said anything, then the chatbots can cut out some of the annoying chatter. Apparently Alexa engineers think this would be more natural. Others might argue Natural's Not In It. (Coercion of the senses? We're not so gullible.)



Priya Anand, The Reality Behind Voice Shopping Hype (The Information, 6 August 2018)

Priya Anand, Amazon’s Alexa Stalled With Users as Interest Faded, Documents Show (Bloomberg, 22 December 2021)

Daphne Leprince-Ringuet, Alexa can now guess what you want before you even ask for it (ZDNet, 13 November 2020)

Tom McKay, Report: Almost Nobody Is Using Amazon's Alexa to Actually Buy Stuff (Gizmodo, 6 August 2018)

Chris Matyszczyk, Amazon wants you to keep quiet, for a brilliantly sinister reason (ZDNet, 4 November 2021)

Related posts: Towards Chatbot Ethics (May 2019), Technology and the Discreet Cough (September 2019), Chatbotics - Coercion of the Senses (April 2023)

Thursday, July 18, 2019

Robust Against Manipulation

As algorithms get more sophisticated, so do the opportunities to trick them. An algorithm can be forced or nudged to make incorrect decisions, in order to yield benefits to a (hostile) third party. John Bates, one of the pioneers of Complex Event Processing, has raised fears of algorithmic terrorism, but algorithmic manipulation may also be motivated by commercial interests or simple vandalism.

An extreme example of this could be a road sign that reads STOP to humans but is misread as something else by self-driving cars. Another example might be false signals that are designed to trigger algorithmic trading and thereby nudge markets. Given the increasing reliance on automatic screening machines at airports and elsewhere, there are obvious incentives for smugglers and terrorists to develop ways of fooling these machines - either to get their stuff past the machines, or to generate so many false positives that the machines aren't taken seriously. And of course email spammers are always looking for ways to bypass the spam filters.

"It will also become increasingly important that AI algorithms be robust against manipulation. A machine vision system to scan airline luggage for bombs must be robust against human adversaries deliberately searching for exploitable flaws in the algorithm - for example, a shape that, placed next to a pistol in one's luggage, would neutralize recognition of it. Robustness against manipulation is an ordinary criterion in information security; nearly the criterion. But it is not a criterion that appears often in machine learning journals, which are currently more interested in, e.g., how an algorithm scales upon larger parallel systems." [Bostrom and Yudkowsky]

One kind of manipulation involves the construction of misleading inputs (known in the literature as "adversarial examples"): inputs that exploit the inaccuracies of a specific image recognition algorithm to produce an image that will be incorrectly classified, thus triggering an incorrect action (or suppressing the correct one).
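To make this concrete, here is a minimal sketch of the best-known technique of this kind, the fast gradient sign method (FGSM): it nudges each pixel of an image a small step in whichever direction increases the classifier's loss. The toy classifier, the image size and the perturbation budget are all invented for illustration; a real attack would target a trained production model.

```python
# A minimal FGSM sketch; the classifier, image and epsilon are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in image classifier (in practice this would be a trained model).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()

def fgsm_attack(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` perturbed to increase the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss, within an epsilon budget.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 32, 32)   # a hypothetical input image
y = torch.tensor([0])          # its correct class
x_adv = fgsm_attack(x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())  # predictions may now differ
```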

Another kind of manipulation involves poisoning the model - deliberately feeding a machine learning algorithm with biased or bad data, in order to disrupt or skew its behaviour. (Historical analogy: manipulation of pop music charts.)
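As a rough illustration of what poisoning might look like in practice, the following sketch flips a fraction of the training labels before fitting a simple classifier, and compares it with a model trained on clean data. The dataset, the flip rate and the choice of classifier are arbitrary; real poisoning attacks can be far more subtle than random label flipping.

```python
# A toy data-poisoning (label-flipping) sketch; dataset and flip rate are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips the labels of roughly 30% of the training examples.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.3
y_poisoned = np.where(flip, 1 - y_train, y_train)

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy trained on clean data:   ", clean_model.score(X_test, y_test))
print("accuracy trained on poisoned data:", poisoned_model.score(X_test, y_test))
```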

We have to assume that some bad actors will have access to the latest technologies, and will themselves be using machine learning and other techniques to design these attacks. This sets up an arms race between the good guys and the bad guys. Is there any way to keep advanced technologies from getting into the wrong hands?

In the security world, people are familiar with the concept of Distributed Denial of Service (DDoS). But perhaps this now becomes Distributed Distortion of Service, which may be more subtle but no less dangerous.

While there are strong arguments for algorithmic transparency of automated systems, some people may be concerned that transparency will aid such attacks. The argument here is that the more adversaries can discover about the algorithm and its training data, the more opportunities for manipulation. But it would be wrong to conclude that we should keep algorithms safe by keeping them secret ("security through obscurity"). A better conclusion would be that transparency should be a defence against manipulation, by making it easier for stakeholders to detect and counter such attempts.




John Bates, Algorithmic Terrorism (Apama, 4 August 2010) and To Catch an Algo Thief (Huffington Post, 26 February 2015)

Nick Bostrom and Eliezer Yudkowsky, The Ethics of Artificial Intelligence (2011)

Ian Goodfellow, Patrick McDaniel and Nicolas Papernot, Making Machine Learning Robust Against Adversarial Inputs (Communications of the ACM, Vol. 61 No. 7, July 2018, pp. 56-66). See also video interview with Papernot.

Neil Strauss, Are Pop Charts Manipulated? (New York Times, 25 January 1996)

Wikipedia: Security Through Obscurity

Related posts: The Unexpected Happens (January 2017)

Tuesday, July 16, 2019

Nudge Technology

People are becoming aware of the ways in which AI and big data can be used to influence people, in accordance with Nudge Theory. Individuals can be nudged to behave in particular ways, and large-scale social systems (including elections and markets) can apparently be manipulated. In other posts, I have talked about the general ethics of nudging systems. This post will concentrate on the technological aspects.

Technologically mediated nudges are delivered by a sociotechnical system we could call a Nudge System. This system might contain several different algorithms and other components, and may even have a human-in-the-loop. Our primary concern here is with the system as a whole.

As an example, I am going to consider a digital advertisement in a public place, which shows anti-smoking messages whenever it detects tobacco smoke.

Typically, a nudge system would perform several related activities; a rough code sketch of how these might fit together follows the list.

1. There would be some mechanism for "reading" the situation - for example, detecting the events that might trigger a nudge, as well as determining the context. This might be a simple sense-and-respond mechanism, or it might include more sophisticated analysis using some kind of model. There is typically an element of surveillance here. In our example, let us imagine that the system is able to distinguish different brands of cigarette, and to determine how many people are smoking in its vicinity.

2. Assuming that there was some variety in the nudges produced by the system, there would be a mechanism for selecting or constructing a specific nudge, using a set of predefined nudges or nudge templates. For example, different anti-smoking messages for the expensive brands versus the cheap brands. Combined with other personal data, the system might even be able to name and shame the smokers.

3. There would then be a mechanism for delivering or otherwise executing the nudge - for example, privately (to a person's phone) or publicly (via a display board). We might call this the nudge agent. In some cases, the nudge may be delivered by a human, but prompted by an intelligent system. If the nudge is publicly visible, this could allow other people to infer the circumstances leading to the nudge - a potential breach of privacy. (For example, letting your friends and family know that you were having a sneaky cigarette, when you had told them that you had given up.)

4. In some cases, there might be a direct feedback loop, giving the system immediate data about the human response to the nudge. Obviously this will not always be possible. Nevertheless, we would expect the system to retain a record of the delivered nudges, for future analysis. To support multiple feedback tempos (as discussed in my work on Organizational Intelligence) there could be multiple components performing the feedback and learning function. Typically, the faster loops would be completely automated (autonomic) while the slower loops would have some human interaction.
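Pulling these four activities together, here is a minimal sketch of how such a pipeline might be wired up for the anti-smoking example. The class names, the detection event and the message templates are all hypothetical, invented purely for this illustration; each component stands in for what would be a substantial subsystem in a real deployment.

```python
# A hypothetical nudge-system pipeline for the anti-smoking example.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Situation:              # 1. "reading" the situation (sensing + context)
    smoke_detected: bool
    brand: Optional[str] = None
    smokers_nearby: int = 0

@dataclass
class Nudge:                  # 2. a selected or constructed nudge
    message: str
    channel: str              # e.g. "public display" or "private phone"

TEMPLATES = {                 # predefined nudge templates (illustrative wording)
    "premium": "That habit costs more than you think.",
    "budget": "Cheaper still: not smoking at all.",
    None: "This is a smoke-free space.",
}

def select_nudge(situation: Situation) -> Optional[Nudge]:
    if not situation.smoke_detected:
        return None
    message = TEMPLATES.get(situation.brand, TEMPLATES[None])
    return Nudge(message=message, channel="public display")

def deliver(nudge: Nudge) -> None:   # 3. executing the nudge via a nudge agent
    print(f"[{nudge.channel}] {nudge.message}")

@dataclass
class NudgeLog:               # 4. feedback: record delivered nudges for later analysis
    delivered: list = field(default_factory=list)

    def record(self, situation: Situation, nudge: Nudge) -> None:
        self.delivered.append((situation, nudge))

log = NudgeLog()
situation = Situation(smoke_detected=True, brand="premium", smokers_nearby=2)
nudge = select_nudge(situation)
if nudge:
    deliver(nudge)
    log.record(situation, nudge)
```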

There would typically be algorithms to support each of these activities, possibly based on some form of Machine Learning. There is therefore the potential for algorithmic bias at several points in the design of the system, as well as various forms of inaccuracy (for example false positives, where the system incorrectly detects tobacco smoke). More information doesn't always mean better information - for example, someone might design a sensor that would estimate the height of the smoker, in order to detect underage smokers - but this obviously introduces new possibilities of error.

In many cases, there will be a separation between the technology engineers who build systems and components, and the social engineers who use these systems and components to produce some commercial or social effects. This raises two different ethical questions.

Firstly, what does responsible use of nudge technology look like - in other words, what are the acceptable ways in which nudge technology can be deployed? For what purposes, with what kind of content, and with what testing and continuous monitoring to detect any signals of harm or bias? Should the nudge be private to the nudgee, or could it be visible to others? What technical and organizational controls should be in place before the nudge technology is switched on?

And secondly, what does responsible nudge technology look like - in other words, technology that can be used safely and reliably, with reasonable levels of transparency and user control?

We may note that nudge technologies can be exploited by third parties with a commercial or political intent. For example, there are constant attempts to trick or subvert the search and recommendation algorithms used by the large platforms, and Alex Hern recently reported on Google's ongoing battle to combat misinformation and promotion of extreme content. So one of the requirements of responsible nudge technology is being Robust Against Manipulation.

We may also note that if there is any bias anywhere, this may either be inherent in the design of the nudge technology itself, or may be introduced by the users of the nudge technology when customizing it for a specific purpose. For example, nudges may be deliberately worded as "dog whistles" - designed to have a strong effect on some subjects while being ignored by others - and this can produce significant and possibly unethical bias in the working of the system. But this bias is not in the algorithm but in the nudge templates, and there may be other ways in which bias is relevant to nudging in general, so the question of algorithmic bias is not the whole story.



Alex Hern, Google tweaked algorithm after rise in US shootings (Guardian, 2 July 2019)

Wikipedia: Nudge Theory

Related posts: Organizational Intelligence in the Control Room (October 2010), On the Ethics of Technologically Mediated Nudge (May 2019), The Nudge as a Speech Act (May 2019), Algorithms and Governmentality (July 2019), Robust Against Manipulation (July 2019)