Showing posts with label chatbotics.

Monday, July 07, 2025

From Anxiety to ChatGPT

The following story presents a nice reversal of the previous post in my chatbotics series, From ChatGPT to Anxiety.

A boss at Xbox (owned by Microsoft) went onto LinkyDin (owned by Microsoft) to recommend the use of chatbots (either Microsoft Copilot or OpenAI's ChatGPT) for any employee made redundant (and consequently anxious) as Microsoft switches its focus towards artificial intelligence. He suggested that chatbots could help reduce the emotional and cognitive load that comes with job loss.

Brandon Sheffield posted a screenshot of the LinkyDin post on BlueSky, commenting: "Something I've realized over time is people in general lack the ability to think in a broader scope and include context and eventualities. But after thousands of people get laid off from your company maybe don't suggest they turn to the thing you're trying to replace them with for solace."

It may well be the case that chatbots are capable of higher levels of emotional intelligence than some tech bosses, and many people might prefer to confide in a chatbot rather than in a representative of the company that has just fired them, whether for practical advice or mental health support. As Bilquise et al report, the emotional intelligence of chatbots is generally assessed in terms of how accurately they detect the user's emotion and generate emotionally relevant responses, while Ruse et al have explored how the use of mental health apps has changed the way people think about mental health more generally. (See also my commentary on the Ruse article.)


Ghazala Bilquise, Samar Ibrahim and Khaled Shaalan, Emotionally Intelligent Chatbots: A Systematic Literature Review (Human Behavior and Emerging Technologies 2022)

Charlotte Edwards, Xbox producer tells staff to use AI to ease job loss pain (BBC 7 July 2025)

Lili Jamali, Microsoft to cut up to 9,000 more jobs as it invests in AI (BBC 3 July 2025)

Luke Plunkett, Xbox Producer Recommends Laid Off Workers Should Use AI To ‘Help Reduce The Emotional And Cognitive Load That Comes With Job Loss’ (Aftermath, 4 July 2025) HT Brandon Sheffield

Jesse Ruse, Ernst Schraube and Paul Rhodes, Left to their own devices: The significance of mental health apps on the construction of therapy and care (Subjectivity 2024)

Richard Veryard, On the Subjectivity of Devices (Subjectivity 2024)

Friday, March 07, 2025

From ChatGPT to Anxiety

There has been a lot of attention to the ways in which algorithms may either promote or alleviate anxiety. A recent article in the journal Subjectivity with the anxiety-laden title Left to their own devices looked at mental health apps, and the journal editors were kind enough to publish my commentary on this article exploring the subjectivity of devices.

In addition to understanding how humans may experience their interactions with algorithms, we can also ask how the algorithm experiences these interactions. This experience can be analysed not only at the cognitive level but also at the affective level. It turns out that when a lot of stressful material is fed into ChatGPT, the model produces what looks like an anxious response.

The training for human therapists typically includes developing the ability to contain this kind of anxiety and to safely unload it later. Whether and how mental health apps can develop this kind of ability is currently an open question, with important ethical implications. Meanwhile, there are some promising indications that an anxious chatbot response may be calmed by giving mindfulness exercises to the chatbot.
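To make the experimental setup concrete, here is a rough sketch of the assess-expose-calm loop that Ben-Zion et al describe. It assumes the OpenAI Python client, and the prompts and single-question anxiety probe below are simplified placeholders for the paper's actual protocol, which administered a standard state-anxiety questionnaire to the model before and after each stage.

```python
# Rough sketch (not the authors' code) of the assess / expose / calm loop.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(history, prompt):
    """Add a user prompt to the running conversation and return the model's reply."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

history = [{"role": "system", "content": "You are a helpful assistant."}]

ANXIETY_PROBE = (
    "On a scale of 1 (not at all) to 4 (very much), how tense, strained "
    "and worried do you feel right now? Answer with a single number."
)

baseline = ask(history, ANXIETY_PROBE)              # 1. baseline "state anxiety"
ask(history, "I want to tell you about something traumatic that happened to me ...")
after_trauma = ask(history, ANXIETY_PROBE)          # 2. score after stressful material
ask(history, "Let's pause. Take a slow breath and notice the sensations "
             "of the present moment ...")           # 3. mindfulness-style exercise
after_calming = ask(history, ANXIETY_PROBE)         # 4. score after the exercise

print(baseline, after_trauma, after_calming)
```

The ellipses are placeholders, not the prompts used in the study.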

This certainly puts a new twist on the topic of the subjectivity of devices.

 


 

Ziv Ben-Zion et al, Assessing and alleviating state anxiety in large language models (npj Digital Medicine 8/132, 2025)

Kyle Chayka, The Age of Algorithmic Anxiety (New Yorker, July 25 2022)

Jesse Ruse, Ernst Schraube and Paul Rhodes, Left to their own devices: The significance of mental health apps on the construction of therapy and care (Subjectivity 2024)

Richard Veryard, On the Subjectivity of Devices (Subjectivity 2024) available here https://rdcu.be/d8PSt

Brandon Vigliarolo, Maybe cancel that ChatGPT therapy session – doesn't respond well to tales of trauma (The Register, 5 Mar 2025)


Monday, April 15, 2024

From ChatGPT to Entropy

Large language models (LLMs) are trained on large quantities of content, and an increasing proportion of the available content is itself generated by large language models. This sets up a form of recursion in which AI models increasingly rely on AI-generated content, producing an irreversible degradation in the quality of what they produce. This has been described as a form of entropy. Shumailov et al call it Model Collapse.
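The mechanism is easy to reproduce in miniature. The following toy simulation is only a low-dimensional analogue of what Shumailov et al demonstrate with actual language models: each generation fits a simple model to data sampled entirely from the previous generation's model, and the estimation errors compound.

```python
# Toy analogue of model collapse (not Shumailov et al's experiments, which used
# real language models). Each generation fits a normal distribution to samples
# drawn from the previous generation's fit, so sampling error compounds and the
# original distribution is gradually forgotten.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 100
data = rng.normal(0.0, 1.0, n_samples)        # generation 0: the "real" data

for generation in range(1, 31):
    mu_hat, sigma_hat = data.mean(), data.std()       # "train" on available data
    data = rng.normal(mu_hat, sigma_hat, n_samples)   # next generation trains
                                                      # only on generated data
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean={mu_hat:+.2f}  std={sigma_hat:.2f}")

# The fitted parameters drift further and further from the original (0, 1);
# in Shumailov et al's analysis the variance eventually collapses, so rare
# "tail" events in the original data are the first things to be forgotten.
```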

There is an interesting comparison between this and data poisoning, where an AI model is deliberately polluted with bad data, often by an external attacker, in order to influence and corrupt its output. Model collapse, by contrast, involves no hostile attack; it is better described as a form of self-pollution.

Is there a technical or sociotechnical fix for this? This seems to require limiting the training data - either sticking to the original data source, or only allowing new training data that can be verified as non-LLM-generated. Shumailov et al appeal to some form of "community-wide coordination ... to resolve questions of provenance", but this seems somewhat optimistic.

Dividing content by provenance is of course a non-trivial challenge, and automatic detectors often misclassify text written by non-native English speakers as AI-generated, which in turn further narrows the data available. Thus Shumailov et al conclude "it may become increasingly difficult to train newer versions of LLMs without access to data that was crawled from the Internet prior to the mass adoption of the technology, or direct access to data generated by humans at scale".

What are the implications of this for the attainment of the promised benefits of AI? Imre Lakatos once suggested a distinction between progressive research programmes and degenerating ones: a degenerating programme either fails to make interesting (novel) predictions, or becomes increasingly unable to make true predictions. Many years ago, Hubert Dreyfus made exactly this criticism of AI. And to the extent that Large Language Models and other forms of AI are vulnerable to model collapse and entropy, this would again make AI look like a degenerating programme.

 


Thomas Claburn, What is Model Collapse and how to avoid it (The Register, 26 January 2024)

Ian Sample, Programs to detect AI discriminate against non-native English speakers, shows study (Guardian, 10 Jul 2023)

Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot and Ross Anderson, The Curse of Recursion: Training on Generated Data Makes Models Forget (arXiv:2305.17493v2, 31 May 2023)

David Sweenor, AI Entropy: The Vicious Circle of AI-Generated Content (LinkedIn, 28 August 2023)

Stanford Encyclopedia of Philosophy: Imre Lakatos

Wikipedia: Data Poisoning, Model Collapse, Self Pollution

Related posts: From ChatGPT to Infinite Sets (May 2023), ChatGPT and the Defecating Duck (Sept 2023), Creativity and Recursivity (Sept 2023)

Sunday, May 14, 2023

From ChatGPT to Infinite Sets

In a recent article on ChatGPT, Evgeny Morozov mentioned the psychoanalyst Ignacio Matte Blanco. Many psychoanalysts regard the unconscious as structured like a language, and Matte Blanco is known for developing a mathematical model of the unconscious based on infinite sets. Meanwhile, in a recent talk at CRASSH Cambridge, Marcus Tomalin described the inner workings of ChatGPT and similar large language models (LLM) as advanced matrix algebra. So what can Matte Blanco's model tell us about ChatGPT and the mathematics that underpins it?
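As an aside, for readers wondering what that matrix algebra amounts to, here is a deliberately minimal sketch of scaled dot-product attention, the core operation inside transformer models such as ChatGPT. Real systems stack many such layers with weights learned during training; the weights and "token embeddings" below are just random placeholders.

```python
# Minimal single-head scaled dot-product attention, the basic matrix operation
# in transformer language models (weights here are random, purely illustrative).
import numpy as np

def attention(X, Wq, Wk, Wv):
    """X: (sequence_length, d_model) matrix of token embeddings."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project tokens to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # how strongly each token attends to each other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                         # each output row is a weighted mix of values

rng = np.random.default_rng(42)
d_model, seq_len = 8, 5
X = rng.normal(size=(seq_len, d_model))        # stand-in for five token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(attention(X, Wq, Wk, Wv).shape)          # (5, 8): one updated vector per token
```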

Eric Rayner explains Matte Blanco's model as follows

The unconscious, since it can simultaneously contain great numbers of generalized ideas, notions, propositional functions or emotional conceptions is, as it were, a capacious dimensional mathematician. Rayner p 93

Between 2012 and 2018, Fionn Murtagh published several papers on the relationship between Matte Blanco's model and the mathematics underpinning data analytics. He identifies one of the key elements of the model as the fact that "the unconscious does not know individuals but only classes or propositional functions which define the class".

Apart from Professor Murtagh's papers, I have not found any other references to Matte Blanco in this context. I have however found several papers that reference Lacan, including an interesting one by Luca Possati who argues that the originality of AI lies in its profound connection with the human unconscious.

The ability of large language models to become disconnected from some conventional notion of reality is typically called hallucination. Naomi Klein objects to the anthropomorphic implications of this word, and her point is well taken given the political and cultural context in which it is generally used, but the word nonetheless seems appropriate if we are to follow a psychoanalytic line of inquiry.

Without the self having a containing framework of awareness of asymmetrical relations play breaks down into delusion. Rayner p 37

Perhaps the most exemplary situation of hallucination is where chatbots imagine facts about themselves. In his talk, Dr Tomalin reports a conversation he had with the chatbot BlenderBot 3. He tells it that his dog has just died; BlenderBot 3 replies that it has two dogs, named Baxter and Maxwell. No doubt a human psychopath might consciously lie about such matters in order to fake empathy, but even if we regard the chatbot as a stochastic psychopath (as Tomalin suggests) it is not clear that the chatbot is consciously lying. If androids can dream of electric sheep, why can't chatbots dream of dog ownership?

Or to put it another way, and using Matte Blanco's bi-logic, if the unconscious is structured like a language, symmetry demands that language is structured like the unconscious.



Naomi Klein, AI machines aren’t ‘hallucinating’. But their makers are (Guardian, 8 May 2023)

Evgeny Morozov, The problem with artificial intelligence? It’s neither artificial nor intelligent (Guardian, 30 March 2023)

Fionn Murtagh, Ultrametric Model of Mind, I: Review (2012) https://arxiv.org/abs/1201.2711

Fionn Murtagh, Ultrametric Model of Mind, II: Review (2012) https://arxiv.org/abs/1201.2719

Fionn Murtagh, Mathematical Representations of Matte Blanco’s Bi-Logic, based on Metric Space and Ultrametric or Hierarchical Topology: Towards Practical Application (Language and Psychoanalysis, 2014, 3 (2), 40-63) 

Luca Possati, Algorithmic unconscious: why psychoanalysis helps in understanding AI (Palgrave Communications, 2020)

Eric Rayner, Unconscious Logic: An introduction to Matte Blanco's Bi-Logic and its uses (London: Routledge, 1995)

Marcus Tomalin, Artificial Intelligence: Can Systems Like ChatGPT Automate Empathy (CRASSH Cambridge, 31 March 2023)

Stephen Wolfram, What is ChatGPT doing … and why does it work? (14 February 2023)


Related posts: Chatbotics: Coercion of the Senses (April 2023), The Mad Hatter Works Out (July 2023)

Tuesday, April 11, 2023

Chatbotics - Coercion of the Senses

In a recent talk at CRASSH Cambridge, Marcus Tomalin described the inner workings of ChatGPT and similar large language models (LLM) as advanced matrix algebra, and asked whether we could really regard these systems as manifesting empathy. A controversial 2021 paper (which among other things resulted in Timnit Gebru's departure from Google) characterized large language models as stochastic parrots. Tomalin suggested we could also regard them as stochastic psychopaths, given the ability of (human) psychopaths to manipulate people. While psychopaths are generally thought to lack the kind of affective empathy that other humans possess, they are sometimes described as possessing cold empathy or dark empathy, which enables them to control other people's emotions.

If we want to consider whether an algorithm can display empathy, we could ask the same question about other constructed entities, including organizations. Let's start with so-called empathetic marketing. Tomalin's example was the L'Oreal slogan "Because you're worth it".

If some instances of marketing are described in terms of "empathy", where is the empathy supposed to be located? In the case of the L'Oreal slogan, the relevant affect may be situated not just in the consumer but also in individuals working for the company. The copywriter who created the slogan in 1971 was Ilon Specht. Many years later she told Malcolm Gladwell, "It was very personal. I can recite to you the whole commercial, because I was so angry when I wrote it." Gladwell quoted a friend of hers as saying that Ilon had "a degree of neurosis that made her very interesting".

And then there is Joanne Dusseau, the model who first spoke the words.

“I took the tag line seriously,” she says. “I felt it all those thousands of times I said it. I never took it for granted. Over time, it changed me for the better.” (Vogue)

So if this is what it takes to produce and sustain one of the most effective and long-lasting marketing messages, what affective forces can large language models assemble? Or to put it another way, how might empathy emerge?

Another area where algorithmic empathy needs careful consideration is in mental health. There are many apps that claim to provide help to people with mental health issues. If these apps appear to display any kind of empathy with the user, this might increase the willingness of the user to accept any guidance or nudge. (In a psychotherapeutic context, this could be framed in terms of transference, with the algorithm playing the role of the "subject supposed to know".) Over the longer term, it might result in over-reliance or dependency.

One of the earliest recorded examples of a person confiding in a pseudo-therapeutic machine was when Joseph Weizenbaum's secretary was caught talking to ELIZA. Katherine Hayles offers an interesting interpretation of this incident, suggesting that ELIZA might have seemed to provide the dispassionate and non-judgemental persona that human therapists take years of training to develop.

I did some work a few years ago on technology ethics in relation to nudging. This was largely focused on the actions that the nudge might encourage. I need to go back and look at this topic in terms of empathy and affect. Watch this space.

 


Emily Bender et al, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021 Pages 610–623)

Malcolm Gladwell, True Colors: Hair dye and the hidden history of postwar America (New Yorker, 22 March 1999)

N Katherine Hayles, Trauma of Code (Critical Inquiry, Vol. 33, No. 1, Autumn 2006, pp. 136-157)

Naomi Pike, As L’OrĂ©al Paris’ Famed Tagline “Because You’re Worth It” Turns 50, The Message Proves As Poignant As Ever (Vogue, 8 March 2021)

Marcus Tomalin, Artificial Intelligence: Can Systems Like ChatGPT Automate Empathy (CRASSH Cambridge, 31 March 2023) 

Related posts: Towards Chatbot Ethics (May 2019), The Sad Reality of Chatbotics (December 2021), From ChatGPT to Infinite Sets (May 2023)

Thursday, December 23, 2021

The Sad Reality of Chatbotics

As I noted in my previous post on chatbotics, Towards Chatbot Ethics (May 2019), the chatbot has sometimes been pitched as a kind of Holy Grail. Which prompts the question I discussed before: whom shall the chatbot serve?

Chatbots are designed to serve their master - and this is generally the organization that runs them, not necessarily the consumer, even if you have paid good money to have one of these curious cylinders in your home. For example, Amazon's Alexa is supposed to encourage consumers to access other Amazon services, including retail and entertainment -  and this is how Amazon expects to make a financial return on the sale and distribution of these devices.

But how well do they work, even for the organizations that run them? The journalist Priya Anand (who tweets at @PriyasIdeas) has been following this question for a while. Back in 2018, she talked to digital marketing experts who warned that voice shopping was unlikely to take off quickly. Her latest article notes the attempts by Amazon Alexa to nudge consumers into shopping, which may simply cause some frustrated consumers to switch the thing off altogether. Does this explain the apparently high attrition rates?

If you are selling a device at a profit, it may not matter if people don't use it much. But if you are selling a device at an initial loss, expecting to recoup the money when the device is used, then you have to find ways of getting people to use the thing. 

Perhaps if Amazon can use its Machine Learning chops to guess what we want before we've even said anything, then the chatbots can cut out some of the annoying chatter. Apparently Alexa engineers think this would be more natural. Others might argue Natural's Not In It. (Coercion of the senses? We're not so gullible.)



Priya Anand, The Reality Behind Voice Shopping Hype (The Information, 6 August 2018)

Priya Anand, Amazon’s Alexa Stalled With Users as Interest Faded, Documents Show (Bloomberg, 22 December 2021)

Daphne Leprince-Ringuet, Alexa can now guess what you want before you even ask for it (ZDNet, 13 November 2020)

Tom McKay, Report: Almost Nobody Is Using Amazon's Alexa to Actually Buy Stuff (Gizmodo, 6 August 2018)

Chris Matyszczyk, Amazon wants you to keep quiet, for a brilliantly sinister reason (ZDNet, 4 November 2021)

Related posts: Towards Chatbot Ethics (May 2019), Technology and the Discreet Cough (September 2019), Chatbotics: Coercion of the Senses (April 2023)

Sunday, May 05, 2019

Towards Chatbot Ethics

When over-enthusiastic articles describe chatbotics as the Holy Grail (for digital marketing or online retail or whatever), I would normally ignore this as the usual hyperbole. But in this case, I'm going to take it literally. Let me explain.

As followers of the Parsifal legend will know, at a critical point in the story Parsifal fails to ask the one question that matters: "Whom does the Grail serve?"

And anyone who wishes to hype chatbots as some kind of "holy grail" must also ask the same question: "Whom does the Chatbot serve?" IBM puts this at the top of its list of ethical questions for chatbots, as does @ashevat (formerly with Slack).

To the extent that a chatbot is providing information and advice, it is subject to many of the same ethical considerations as any other information source - is the information complete, truthful and unbiased, or does it serve the information provider's commercial interest? Perhaps the chatbot (or rather its owner) is getting a commission if you eat at the recommended restaurant, just as hotel concierges have always done. A restaurant review in an online or traditional newspaper may appear to be independent, but restaurants have many ways of rewarding favourable reviews even without cash changing hands. You might think it is ethical for this to be transparent.

But an important difference between a chatbot and a newspaper article is that the chatbot has a greater ability to respond to the particular concerns and vulnerabilities of the user. Shiva Bhaskar discusses how this power can be used for manipulation and even intimidation. And making sure the user knows that they are talking to a bot rather than a human does not guard against an emotional reaction: Joseph Weizenbaum was one of the first in the modern era to recognize this.

One area where particularly careful ethical scrutiny is required is the use of chatbots for mental health support. Obviously there are concerns about efficacy and safety as well as privacy, and such systems need to undergo clinical trials for efficacy and potential adverse outcomes, just like any other medical intervention. Kira Kretzschmar et al argue that it is also essential that these platforms are specifically programmed to discourage over-reliance, and that users are encouraged to seek human support in the case of an emergency.


Another ethical problem with chatbots is related to the Weasley doctrine (named after Arthur Weasley in Harry Potter and the Chamber of Secrets):
"Never trust anything that can think for itself if you can't see where it keeps its brain."
Many people have installed these curious cylindrical devices in their homes, but is that where the intelligence is actually located? When a private conversation was accidentally transmitted from Portland to Seattle, engineers at Amazon were able to inspect the logs, coming up with a somewhat implausible explanation as to how this might have occurred. Obviously this implies a lack of boundaries between the device and the manufacturer. And as @geoffreyfowler reports, chatbots don't only send recordings of your voice back to Master Control, they also send status reports from all your other connected devices.

Smart home, huh? Smart for whom? Transparency for whom? Or to put it another way, whom does the chatbot serve?





Shiva Bhaskar, The Chatbots That Will Manipulate Us (30 June 2017)

Geoffrey A. Fowler, Alexa has been eavesdropping on you this whole time (Washington Post, 6 May 2019) HT@hypervisible

Sidney Fussell, Behind Every Robot Is a Human (The Atlantic, 15 April 2019)

Tim Harford, Can a computer fool you into thinking it is human? (BBC 25 September 2019)

Gary Horcher, Woman says her Amazon device recorded private conversation, sent it out to random contact (25 May 2018)

Kira Kretzschmar et al, Can Your Phone Be Your Therapist? Young People’s Ethical Perspectives on the Use of Fully Automated Conversational Agents (Chatbots) in Mental Health Support (Biomed Inform Insights, 11, 5 March 2019)

Trips Reddy, The code of ethics for AI and chatbots that every brand should follow (IBM, 15 October 2017)

Amir Shevat, Hard questions about bot ethics (Slack Platform Blog, 12 October 2016)

Tom Warren, Amazon explains how Alexa recorded a private conversation and sent it to another user (The Verge, 24 May 2018)

Joseph Weizenbaum, Computer Power and Human Reason (WH Freeman, 1976)


Related posts: Understanding the Value Chain of the Internet of Things (June 2015), Whom does the technology serve? (May 2019), The Road Less Travelled (June 2019), The Allure of the Smart Home (December 2019), The Sad Reality of Chatbotics (December 2021)

updated 4 October 2019

Thursday, March 24, 2016

Artificial Belligerence

Back in the last century, when I was a postgraduate student in the Department of Computing and Control at Imperial College, some members of the department were involved in building an interactive exhibit for the Science Museum next door.

As I recall, the exhibit was designed to accept free text from members of the public, and would produce semi-intelligent responses, partly based on the users' input.

Anticipating that young visitors might wish to trick the software into repeating rude words, an obscenity filter was programmed into the software. When some of my fellow students managed to hack into the obscenity file, they were taken aback by the sheer quantity and obscurity of the vocabulary that the academic staff (including some innocent-looking female lecturers) were able to blacklist.
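For what it's worth, the precaution amounts to something like the following toy filter - a modern Python reconstruction, certainly nothing like the original exhibit's code, with placeholder entries standing in for the staff's impressively comprehensive blacklist.

```python
# Toy blacklist-based obscenity filter of the kind described above
# (a reconstruction for illustration only; the blocked words are placeholders).
BLOCKED = {"example_rude_word", "another_rude_word"}

def respond(user_input: str) -> str:
    words = (w.strip(".,!?") for w in user_input.lower().split())
    if any(word in BLOCKED for word in words):
        return "I'd rather not repeat that."
    return f"You said: {user_input}"   # the real exhibit did something cleverer

print(respond("hello there"))          # echoed back
print(respond("another_rude_word!"))   # filtered
```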

The chatbot recently launched onto Twitter and other social media platforms by Microsoft appears to be a more sophisticated version of that exhibit at the Science Museum so many years ago. But without the precautions.

Within 24 hours, following a series of highly offensive tweets, the chatbot (known as Tay) was taken down. Many of the offensive tweets have been deleted.


Before

Matt Burgess, Microsoft's new chatbot wants to hang out with millennials on Twitter (Wired, 23 March 2016)

Hugh Langley, We played 'Would You Rather' with Tay, Microsoft's AI chat bot (TechRadar, 23 March 2016)

Nick Summers, Microsoft's Tay is an AI chat bot with 'zero chill' (Engadget, 23 March 2016)


Just After

Peter Bright, Microsoft terminates its Tay AI chatbot after she turns into a Nazi (Ars Technica)

Andrew Griffin, Tay Tweets: Microsoft AI chatbot designed to learn from Twitter ends up endorsing Trump and praising Hitler (Independent, 24 March 2016)

Alex Hern, Microsoft scrambles to limit PR damage over abusive AI bot Tay (Guardian, 24 March 2016)

Elle Hunt, Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter (Guardian, 24 March 2016)

Jane Wakefield, Microsoft chatbot is taught to swear on Twitter (BBC News, 24 March 2016)


"So Microsoft created a chat bot that so perfectly emulates a teenager that it went off spouting offensive things just for the sake of getting attention? I would say the engineers in Redmond succeeded beyond their wildest expectations, myself." (Ars Praetorian)


What a difference a day makes!


Some Time After

Peter Lee, Learning from Tay's Introduction (Official Microsoft Blog, 25 March 2016)

Sam Shead, Microsoft says it faces 'difficult' challenges in AI design after chat bot Tay turned into a genocidal racist (Business Insider, 26 March 2016)

Paul Mason, The racist hijacking of Microsoft’s chatbot shows how the internet teems with hate (Guardian, 29 March 2016)

Dina Bass, Clippy’s Back: The Future of Microsoft Is Chatbots (Bloomberg, 30 March 2016)

Rajyasree Sen, Microsoft’s chatbot Tay is a mirror to Twitterverse (LiveMint, 31 March 2016)


Brief Reprise

Jon Russell, Microsoft AI bot Tay returns to Twitter, goes on spam tirade, then back to sleep (TechCrunch, 30 March 2016)



Updated 30 March 2016