Sunday, May 14, 2023

From ChatGPT to Infinite Sets

In a recent article on ChatGPT, Evgeny Morozov mentioned the psychoanalyst Ignacio Matte Blanco. Many psychoanalysts regard the unconscious as structured like a language, and Matte Blanco is known for developing a mathematical model of the unconscious based on infinite sets. Meanwhile, in a recent talk at CRASSH Cambridge, Marcus Tomalin described the inner workings of ChatGPT and similar large language models (LLMs) as advanced matrix algebra. So what can Matte Blanco's model tell us about ChatGPT and the mathematics that underpins it?
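To give a flavour of what that matrix algebra looks like, here is a minimal sketch of a single head of scaled dot-product attention, the core operation inside transformer models such as ChatGPT. Everything here (dimensions, random weights) is invented for illustration; a real model composes many thousands of such matrix products with learned parameters.

```python
# A minimal sketch of the "matrix algebra" at the heart of a transformer:
# one head of scaled dot-product attention. All values are invented.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                        # 4 tokens, 8-dim embeddings

X = rng.normal(size=(seq_len, d_model))        # token embeddings
W_q = rng.normal(size=(d_model, d_model))      # learned projections
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = X @ W_q, X @ W_k, X @ W_v            # three matrix multiplications
scores = Q @ K.T / np.sqrt(d_model)            # token-to-token similarity
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
output = weights @ V                           # weighted mix of value vectors

print(output.shape)                            # (4, 8): one new vector per token
```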

Eric Rayner explains Matte Blanco's model as follows:

The unconscious, since it can simultaneously contain great numbers of generalized ideas, notions, propositional functions or emotional conceptions is, as it were, a capacious dimensional mathematician. (Rayner, p 93)

Between 2012 and 2018, Fionn Murtagh published several papers on the relationship between Matte Blanco's model and the mathematics underpinning data analytics. He identifies one of the key elements of the model: "the unconscious does not know individuals but only classes or propositional functions which define the class".
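Murtagh formalizes this hierarchical, class-based structure using ultrametric spaces, in which every triangle is isosceles and distances satisfy the strong triangle inequality. The following toy sketch (invented data, not Murtagh's own code) shows that the distances read off a hierarchical clustering are ultrametric:

```python
# Toy illustration: hierarchy induces an ultrametric, i.e. distances
# satisfying d(x,z) <= max(d(x,y), d(y,z)) for every triple of points.
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist, squareform

points = np.array([[0.0], [0.1], [0.9], [1.0], [1.1]])   # invented data
Z = linkage(pdist(points), method="single")    # hierarchical clustering
d = squareform(cophenet(Z))                    # dendrogram (cophenetic) distances

# Check the strong triangle inequality on every triple.
n = len(points)
ok = all(d[i, k] <= max(d[i, j], d[j, k]) + 1e-12
         for i in range(n) for j in range(n) for k in range(n))
print(ok)   # True: every triangle is isosceles with a small base
```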

Apart from Professor Murtagh's papers, I have not found any other references to Matte Blanco in this context. I have, however, found several papers that reference Lacan, including an interesting one by Luca Possati, who argues that the originality of AI lies in its profound connection with the human unconscious.

The ability of large language models to become disconnected from some conventional notion of reality is typically called hallucination. Naomi Klein objects to the anthropomorphic implications of this word, and her point is well taken given the political and cultural context in which it is generally used, but the word nonetheless seems appropriate if we are to follow a psychoanalytic line of inquiry.

Without the self having a containing framework of awareness of asymmetrical relations play breaks down into delusion. (Rayner, p 37)

Perhaps the clearest example of hallucination is when chatbots imagine facts about themselves. In his talk, Dr Tomalin reports a conversation he had with the chatbot BlenderBot 3. He tells it that his dog has just died; BlenderBot 3 replies that it has two dogs, named Baxter and Maxwell. No doubt a human psychopath might consciously lie about such matters in order to fake empathy, but even if we regard the chatbot as a stochastic psychopath (as Tomalin suggests) it is not clear that the chatbot is consciously lying. If androids can dream of electric sheep, why can't chatbots dream of dog ownership?

Or to put it another way, and using Matte Blanco's bi-logic, if the unconscious is structured like a language, symmetry demands that language is structured like the unconscious.
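For readers who want the two unconscious principles of bi-logic stated compactly, here is a paraphrase in set-theoretic notation (my notation, following Rayner's exposition rather than Matte Blanco's own symbols):

```latex
% A compact paraphrase of Matte Blanco's two principles (after Rayner).
\begin{align*}
\text{Generalization:}\quad & x \in C_1 \subseteq C_2 \subseteq C_3 \subseteq \cdots
  && \text{(individuals dissolve into ever wider classes)}\\
\text{Symmetry:}\quad & x\,R\,y \implies y\,R\,x
  && \text{(every relation is treated as its own converse)}
\end{align*}
% Under symmetry, "part of" becomes reciprocal: the part contains the
% whole just as the whole contains the part. This is the signature
% property of an infinite set, which can be put into one-to-one
% correspondence with a proper subset of itself.
```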



Naomi Klein, AI machines aren’t ‘hallucinating’. But their makers are (Guardian, 8 May 2023)

Evgeny Morozov, The problem with artificial intelligence? It’s neither artificial nor intelligent (Guardian, 30 March 2023)

Fionn Murtagh, Ultrametric Model of Mind, I: Review (2012) https://arxiv.org/abs/1201.2711

Fionn Murtagh, Ultrametric Model of Mind, II: Review (2012) https://arxiv.org/abs/1201.2719

Fionn Murtagh, Mathematical Representations of Matte Blanco’s Bi-Logic, based on Metric Space and Ultrametric or Hierarchical Topology: Towards Practical Application (Language and Psychoanalysis, 2014, 3 (2), 40-63) 

Luca Possati, Algorithmic unconscious: why psychoanalysis helps in understanding AI (Palgrave Communications, 2020)

Eric Rayner, Unconscious Logic: An introduction to Matte Blanco's Bi-Logic and its uses (London: Routledge, 1995)

Marcus Tomalin, Artificial Intelligence: Can Systems Like ChatGPT Automate Empathy? (CRASSH Cambridge, 31 March 2023)

Stephen Wolfram, What is ChatGPT doing … and why does it work? (14 February 2023)


Related posts: Chatbotics - Coercion of the Senses (April 2023), The Mad Hatter Works Out (July 2023)

Tuesday, April 11, 2023

Chatbotics - Coercion of the Senses

In a recent talk at CRASSH Cambridge, Marcus Tomalin described the inner workings of ChatGPT and similar large language models (LLMs) as advanced matrix algebra, and asked whether we could really regard these systems as manifesting empathy. A controversial 2021 paper (which among other things resulted in Timnit Gebru's departure from Google) characterized large language models as stochastic parrots. Tomalin suggested we could also regard them as stochastic psychopaths, given the ability of (human) psychopaths to manipulate people. While psychopaths are generally thought to lack the kind of affective empathy that other humans possess, they are sometimes described as possessing cold empathy or dark empathy, which enables them to control other people's emotions.

If we want to consider whether an algorithm can display empathy, we could ask the same question about other constructed entities, including organizations. Let's start with so-called empathetic marketing. Tomalin's example was the L'Oréal slogan "because you're worth it".

If some instances of marketing are described in terms of "empathy", where is the empathy supposed to be located? In the case of the L'Oréal slogan, relevant affect may be situated not just in the consumer but also in individuals working for the company. The copywriter who created the slogan in 1971 was Ilon Specht. Many years later she told Malcolm Gladwell, "It was very personal. I can recite to you the whole commercial, because I was so angry when I wrote it." Gladwell quoted a friend of hers as saying Ilon had "a degree of neurosis that made her very interesting".

And then there is Joanne Dusseau, the model who first spoke the words.

“I took the tag line seriously,” she says. “I felt it all those thousands of times I said it. I never took it for granted. Over time, it changed me for the better.” (Vogue)

So if this is what it takes to produce and sustain one of the most effective and long-lasting marketing messages, what affective forces can large language models assemble? Or to put it another way, how might empathy emerge?

Another area where algorithmic empathy needs careful consideration is mental health. There are many apps that claim to provide help to people with mental health issues. If these apps appear to display any kind of empathy with the user, this might increase the user's willingness to accept any guidance or nudge. (In a psychotherapeutic context, this could be framed in terms of transference, with the algorithm playing the role of the "subject supposed to know".) Over the longer term, it might result in over-reliance or dependency.

One of the earliest recorded examples of a person confiding in a pseudo-therapeutic machine is Joseph Weizenbaum's secretary, who was caught talking to ELIZA. Katherine Hayles offers an interesting interpretation of this incident, suggesting that ELIZA might have seemed to provide the dispassionate and non-judgemental persona that human therapists take years of training to develop.
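ELIZA's apparent empathy rested on nothing more than pattern matching and pronoun reflection. The sketch below is a minimal illustration in that spirit - it is not Weizenbaum's original DOCTOR script - matching a couple of templates and handing the user's statement back as a question:

```python
# A minimal ELIZA-style sketch: match a pattern, swap pronouns, and
# return the user's statement as a non-judgemental question.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(phrase: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(utterance: str) -> str:
    m = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", utterance, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return "Please tell me more."

print(respond("I am worried about my job"))
# -> How long have you been worried about your job?
```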

I did some work a few years ago on technology ethics in relation to nudging. This was largely focused on the actions that the nudge might encourage. I need to go back and look at this topic in terms of empathy and affect. Watch this space.



Emily Bender et al, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, pp 610–623)

Malcolm Gladwell, True Colors: Hair dye and the hidden history of postwar America (New Yorker, 22 March 1999)

N Katherine Hayles, Trauma of Code (Critical Inquiry, Vol. 33, No. 1, Autumn 2006, pp. 136-157)

Naomi Pike, As L’Oréal Paris’ Famed Tagline “Because You’re Worth It” Turns 50, The Message Proves As Poignant As Ever (Vogue, 8 March 2021)

Marcus Tomalin, Artificial Intelligence: Can Systems Like ChatGPT Automate Empathy? (CRASSH Cambridge, 31 March 2023)

Related posts: Towards Chatbot Ethics (May 2019), The Sad Reality of Chatbotics (December 2021), From ChatGPT to Infinite Sets (May 2023)

Sunday, April 10, 2022

Lie Detectors at Airports

@jamesvgingerich reports that the EU is putting lie detector robots on its borders. @Abebab is horrified.


There are several things worth noting here.

Firstly, lie detectors work by detecting involuntary actions (eye movements, heart rate) that are thought to be a proxy for mendacity. But there are often alternative explanations for these actions, so their interpretation is highly problematic. See my post on Memory and the Law (June 2008).

Secondly, there is a lot of expensive and time-wasting technology installed at airports already, which has dubious value in detecting genuine threats, but may help to make people feel safer. Bruce Schneier calls this Security Theatre. See my posts on the False Sense of Security (June 2019) and Identity, Privacy and Security at Heathrow Terminal Five (March 2008).

What is more important is to consider the consequences of adding this component (whether reliable or otherwise) to the larger system. In my post Listening for Trouble (June 2019), I discussed the use of Aggression Detection microphones in US schools, following an independent study that was carried out with active collaboration from the supplier of the equipment. Obviously this kind of evaluation requires some degree of transparency.

Most important of all is the ethical question. Is this technology biased against certain categories of subject, and what are the real-world consequences of being falsely identified by this system? Is having a human in the loop sufficient protection against the dangers of algorithmic profiling? See Algorithmic Bias (March 2021).

Given the inaccuracy of detection, there may be a significant rate of false positives and false negatives. False positives affect the individual concerned, who may suffer consequences ranging from inconvenience and delay to much worse. False negatives mean that a person has got away with an undetected lie, which has consequences for society as a whole.
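Some toy arithmetic shows why the false positives tend to dominate when the thing being detected is rare. All the figures below are invented for illustration; they are not claims about iBorderCtrl or any deployed system:

```python
# Illustrative base-rate arithmetic for a hypothetical border lie detector.
# Accuracy and prevalence figures are assumptions, invented for this sketch.

travellers = 1_000_000     # people screened
prevalence = 0.001         # fraction actually lying about something serious
sensitivity = 0.90         # chance a real lie is flagged
specificity = 0.90         # chance a truthful traveller is cleared

liars = travellers * prevalence
truthful = travellers - liars

true_positives = liars * sensitivity
false_negatives = liars - true_positives          # lies that get through
false_positives = truthful * (1 - specificity)    # innocent people flagged

print(f"Flagged travellers:  {true_positives + false_positives:,.0f}")
print(f"  of which innocent: {false_positives:,.0f}")
print(f"Undetected liars:    {false_negatives:,.0f}")
# With these assumptions, roughly 99% of flagged travellers are innocent.
```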

How much you think this matters depends on what you think they might be lying about, and how important this is. For example, it may be quicker to say you packed your suitcase yourself and it hasn't been out of your sight, even if this is not strictly true, because any other answer may trigger loads of other time-wasting questions. However, other lies may be more dangerous ...



For more details on the background of this initiative, see

Matthias Monroy, EU project iBorderCtrl: Is the lie detector coming or not? (26 April 2021)

Thursday, December 23, 2021

The Sad Reality of Chatbotics

As I noted in my previous post on chatbotics, Towards Chatbot Ethics (May 2019), the chatbot has sometimes been pitched as a kind of Holy Grail. This prompts the question I discussed before: whom shall the chatbot serve?

Chatbots are designed to serve their master - and this is generally the organization that runs them, not necessarily the consumer, even if you have paid good money to have one of these curious cylinders in your home. For example, Amazon's Alexa is supposed to encourage consumers to access other Amazon services, including retail and entertainment - and this is how Amazon expects to make a financial return on the sale and distribution of these devices.

But how well do they work, even for the organizations that run them? The journalist Priya Anand (who tweets at @PriyasIdeas) has been following this question for a while. Back in 2018, she talked to digital marketing experts who warned that voice shopping was unlikely to take off quickly. Her latest article notes the attempts by Amazon Alexa to nudge consumers into shopping, which may simply cause some frustrated consumers to switch the thing off altogether. Does this explain the apparently high attrition rates?

If you are selling a device at a profit, it may not matter if people don't use it much. But if you are selling a device at an initial loss, expecting to recoup the money when the device is used, then you have to find ways of getting people to use the thing. 
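This is standard loss-leader arithmetic. A minimal sketch, with every figure invented rather than taken from Amazon's accounts:

```python
# Toy loss-leader arithmetic for a subsidised smart speaker.
# All figures are invented for illustration.

unit_cost = 45.0            # cost to build and ship the device
sale_price = 30.0           # sold below cost
subsidy = unit_cost - sale_price

margin_per_order = 1.50     # profit per voice-shopping order
orders_to_break_even = subsidy / margin_per_order
print(f"Orders needed to recoup the subsidy: {orders_to_break_even:.0f}")
# If most owners never shop by voice, the subsidy is never recouped.
```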

Perhaps if Amazon can use its Machine Learning chops to guess what we want before we've even said anything, then the chatbots can cut out some of the annoying chatter. Apparently Alexa engineers think this would be more natural. Others might argue Natural's Not In It. (Coercion of the senses? We're not so gullible.)



Priya Anand, The Reality Behind Voice Shopping Hype (The Information, 6 August 2018)

Priya Anand, Amazon’s Alexa Stalled With Users as Interest Faded, Documents Show (Bloomberg, 22 December 2021)

Daphne Leprince-Ringuet, Alexa can now guess what you want before you even ask for it (ZDNet, 13 November 2020)

Tom McKay, Report: Almost Nobody Is Using Amazon's Alexa to Actually Buy Stuff (Gizmodo, 6 August 2018)

Chris Matyszczyk, Amazon wants you to keep quiet, for a brilliantly sinister reason (ZDNet, 4 November 2021)

Related posts: Towards Chatbot Ethics (May 2019), Technology and the Discreet Cough (September 2019), Chatbotics - Coercion of the Senses (April 2023)

Tuesday, September 28, 2021

Automation and the Red Queen Effect

Product vendors and technology advisory firms often talk about accelerating automation. A popular way of presenting advice is in the form of an executive survey - look, all your peers are thinking about this, so you'd better spend some money with us too. Thus the Guardian reports a survey carried out by one of the large consulting firms, which concluded that almost half of company bosses in 45 countries are speeding up plans to automate their businesses. In the early days of the COVID-19 pandemic, many saw this as a great opportunity to push automation further, although more recent commentators have been more sceptical.

Politicians have also bought into this narrative. For example, Barack Obama's farewell presidential address referred to the relentless pace of automation.

For technical change more generally, there is a common belief that things are constantly getting faster. In my previous posts on what is sometimes known as the Red Queen Effect, I have expressed the view that perceptions of technological change often seem to be distorted by proximity and a subjective notion of significance - certain types of recent innovation being regarded as more important and exciting than other or older innovations.

Aaron Benanav takes a similar view.

Our collective sense that the pace of labor-saving technological change is accelerating is an illusion. It’s like the feeling you get when looking out of the window of a train car as it slows down at a station: passing cars on the other side of the tracks appear to speed up. Labor-saving technical change appears to be happening at a faster pace than before only when viewed from across the tracks – that is, from the standpoint of our ever more slow-growing economies. (Benanav 2020)

Benanav also notes that the automation narrative has been around since the days of Karl Marx.

Visions of automated factories then appeared again in the 1930s, 1950s and 1980s, before their re-emergence in the 2010s. (Benanav 2019)

Meanwhile, Judy Wajcman argues (referencing Lucy Suchman) that the automation narrative typically relies on overlooking the human labour that is required to keep the computers and robots working efficiently - especially the low-paid work of data coders and content checkers. Further evidence of this has recently been published by Phil Jones.



Bosses speed up automation as virus keeps workers home (Guardian, 30 March 2020)

Is the pandemic accelerating automation? Don’t be so sure (Economist, 19 June 2021) subscription required

Aaron Benanav, Automation and the Future of Work. Part One (New Left Review 119 Sept/Oct 2019) Part Two (New Left Review 120, Nov/Dec 2019)

Aaron Benanav, Automation isn't wiping out jobs. It's that our engine of growth is winding down (Guardian, 23 January 2020)

Phil Jones, Work without the worker - Labour in the age of platform capitalism (Verso, 2021) Extract published as Refugees help power machine learning advances at Microsoft, Facebook, and Amazon (Rest of World, 22 September 2021)

Toby McClean, Automation Is Accelerating At The Edge To Improve Workplace Safety, Productivity (Forbes, 5 January 2021) 

Judy Wajcman, Automation: is it really different this time? (British Journal of Sociology, 2017)

Chris Wiltz, Grocery Automation Is Accelerating Thanks to the Coronavirus (Grocery News, 16 April 2020)

Tuesday, April 27, 2021

The Allure of the Smart City

The concept of smart city seems to encompass a broad range of sociotechnical initiatives, including community-based healthcare, digital electricity, housing affordability and sustainability, next-generation infrastructure, noise pollution, quality of air and water, robotic furniture, transport and mobility, and urban planning. The smart city is not a technology as such, more like an assemblage of technologies.

Within this mix, there is often a sincere attempt to address some serious social and environmental concerns, such as reducing the city's carbon footprint. However, Professor Rob Kitchin notes a tendency towards greenwashing or even ethics washing.

Kitchin also raises concerns about civic paternalism - city authorities and their tech partners knowing what's best for the citizenry.

On the other hand, John Koetsier makes the point that If-We-Don't-Do-It-The-Chinese-Will. This point was also recently made by Jeremy Fleming in his 2021 Vincent Briscoe lecture. (See my post on the Invisibility of Infrastructure.)

Meanwhile, here is a small and possibly unrepresentative sample of Smart City initiatives in the West that have reached the press recently.

  • Madrid with IBM
  • Portland with Google Sidewalk - cancelled Feb 2021
  • San Jose with Intel - pilot programme
  • Toronto with Google Sidewalk (Quayside) - cancelled May 2020


Daniel Doctoroff, Why we’re no longer pursuing the Quayside project — and what’s next for Sidewalk Labs (Sidewalk Talk, 7 May 2020)

Rob Kitchin, The Ethics of Smart Cities (RTE, 27 April 2019)

John Koetsier, 9 Things We Lost When Google Canceled Its Smart Cities Project In Toronto (Forbes, 13 May 2020) 

Mark Ryan and Anya Gregory, Ethics of Using Smart City AI and Big Data: The Case of Four Large European Cities (Orbit, Vol 2/2, 2019)

Juan Pedro Tomás, Smart city case study: San Jose, California (Enterprise IOT Insights, 5 October 2017)

Jane Wakefield, The Google city that has angered Toronto (BBC News, 18 May 2019), Google-linked smart city plan ditched in Portland (BBC News, 23 February 2021)


See also IoT is coming to town (December 2017), On the invisibility of infrastructure (April 2021)


Wednesday, February 03, 2021

Andy Jassy

Most people still think of Amazon primarily as an online retailer, but the elevation of Andy Jassy to take over from Jeff Bezos as CEO provides further evidence for the strategic importance of Amazon Web Services (AWS) within the Amazon group.

AWS was launched in 2002 and relaunched in 2006. In March 2008, Om Malik published an interview with Ray Ozzie, then the Chief Software Architect at Microsoft, which included some positive comments about AWS. By the end of the year, both Google and Microsoft had announced rival cloud computing offerings. As far as I can see, cloud computing first appeared as an Emerging Technology on the Gartner Hype Curve (it's not a cycle) in 2008, reaching the Peak of Inflated Expectations by 2009.

During that period, I was a software industry analyst, calling out Jeff Bezos and Ray Ozzie as two of the most visionary players in the industry. My colleague Lawrence Wilkes wrote a long report on AWS in 2004. (But the hype around cloud computing took off later, and the broader awareness of AWS is comparatively recent, so I'm not convinced that the classic hype curve applies to this topic.)

Alongside the news of Jassy's elevation, today's tech press also reports that Google Cloud is still making massive losses. So much for the Slope of Enlightenment then.




Jasper Jolly, Bezos leaves Amazon in its prime – keeping it that way is the task (The Guardian, 3 February 2021)

Kieren McCarthy, So Jeff Bezos is stepping back from Amazon to play with his space rockets. Who's this Andy Jassy chap? (The Register, 3 February 2021)

Om Malik, GigaOM Interview: Ray Ozzie (GigaOM, 10 March 2008)

Ron Miller, What Andy Jassy’s promotion to Amazon CEO could mean for AWS (TechCrunch, 2 February 2021)

Simon Sharwood, Google's cloud services lost $14.6bn over three years – and CEO Sundar Pichai likes that trajectory (The Register, 3 February 2021)

Lawrence Wilkes, Amazon and eBay Web Services - The new enterprise applications? (CBDI Journal, October 2004) 


Related posts: Jeff Bezos and Ecosystem Thinking (February 2004), Amazon and eBay (August 2004), Internet Service Disruption (November 2005), Ray Ozzie (March 2008), Utility Computing and Profitability (March 2008)

Also Technology Hype Curve (September 2005)