In a recent article on ChatGPT, Evgeny Morozov mentioned the psychoanalyst Ignacio Matte Blanco. Many psychoanalysts regard the unconscious as structured like a language, and Matte Blanco is known for developing a mathematical model of the unconscious based on infinite sets. Meanwhile, in a recent talk at CRASSH Cambridge, Marcus Tomalin described the inner workings of ChatGPT and similar large language models (LLMs) as advanced matrix algebra. So what can Matte Blanco's model tell us about ChatGPT and the mathematics that underpins it?
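Tomalin's description is worth taking literally: the core computation of a transformer layer really is matrix algebra. The sketch below is a toy illustration in NumPy, with random matrices standing in for learned weights rather than anything from ChatGPT itself, showing scaled dot-product attention as a few matrix multiplications and a softmax.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: the matrix-algebra heart of a
    transformer layer (toy version, no learned parameters)."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted mix of values

# random stand-ins for the projected token representations
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (4, 8): one mixed value vector per query token
```

Everything a model like ChatGPT does at inference time reduces to many layers of operations of this shape, which is why Tomalin can fairly call it advanced matrix algebra.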
Eric Rayner explains Matte Blanco's model as follows:
The unconscious, since it can simultaneously contain great numbers of generalized ideas, notions, propositional functions or emotional conceptions is, as it were, a capacious dimensional mathematician. (Rayner, p. 93)
Between 2012 and 2018, Fionn Murtagh published several papers on the relationship between Matte Blanco's model and the mathematics underpinning data analytics. He identifies as a key element of the model the principle that "the unconscious does not know individuals but only classes or propositional functions which define the class".
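Murtagh formalizes this class-bound knowing in terms of ultrametric spaces, where the distance between two individuals is, roughly, how far up a hierarchy of classes one must climb to find a class containing both. The toy sketch below uses hypothetical individuals and classes of my own invention, not Murtagh's data, to show two consequences: the distance satisfies the strong (ultrametric) triangle inequality, and two individuals in the same smallest class become indistinguishable, with distance zero.

```python
# Hypothetical individuals, each known only by its path of nested
# classes from the root of the hierarchy.
paths = {
    "fido":   ("animal", "dog", "terrier"),
    "rex":    ("animal", "dog", "terrier"),
    "baxter": ("animal", "dog", "spaniel"),
    "felix":  ("animal", "cat", "tabby"),
}
DEPTH = 3  # all paths have the same depth in this toy hierarchy

def d(x, y):
    """Ultrametric distance: levels to climb before x and y share a class."""
    shared = 0
    for a, b in zip(paths[x], paths[y]):
        if a != b:
            break
        shared += 1
    return DEPTH - shared

# The strong triangle inequality d(x, z) <= max(d(x, y), d(y, z))
# holds for every triple -- the defining property of an ultrametric.
names = list(paths)
assert all(d(x, z) <= max(d(x, y), d(y, z))
           for x in names for y in names for z in names)

# Two terriers are indistinguishable: the hierarchy knows only classes.
print(d("fido", "rex"), d("fido", "baxter"), d("fido", "felix"))  # → 0 1 2
```

Distance zero between fido and rex is the point: a logic that knows only classes cannot tell two members of the same smallest class apart.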
Apart from Professor Murtagh's papers, I have not found any other references to Matte Blanco in this context. I have however found several papers that reference Lacan, including an interesting one by Luca Possati who argues that
the originality of AI lies in its profound connection with the human unconscious.
The ability of large language models to become disconnected from some conventional notion of reality is typically called hallucination. Naomi Klein objects to the anthropomorphic implications of this word, and her point is well taken given the political and cultural context in which it is generally used, but the word nonetheless seems appropriate if we are to follow a psychoanalytic line of inquiry.
Without the self having a containing framework of awareness of asymmetrical relations play breaks down into delusion. (Rayner, p. 37)
Perhaps the clearest example of hallucination is when chatbots imagine facts about themselves. In his talk, Dr Tomalin reports a conversation he had with the chatbot BlenderBot 3. He tells it that his dog has just died; BlenderBot 3 replies that it has two dogs, named Baxter and Maxwell. No doubt a human psychopath might consciously lie about such matters in order to fake empathy, but even if we regard the chatbot as a stochastic psychopath (as Tomalin suggests), it is not clear that the chatbot is consciously lying. If androids can dream of electric sheep, why can't chatbots dream of dog ownership?
Or to put it another way, and using Matte Blanco's bi-logic, if the unconscious is structured like a language, symmetry demands that language is structured like the unconscious.
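Matte Blanco's principle of symmetry can itself be given a minimal formal sketch: in the symmetric logic of the unconscious, every relation is treated as symmetric, so from aRb it infers bRa. The toy code below, using a hypothetical parent_of relation of my own invention, shows how symmetrizing an ordinary asymmetric relation collapses exactly the distinctions that conscious, asymmetric logic depends on.

```python
# Hypothetical facts, stored as ordered pairs of an asymmetric relation.
parent_of = {("alice", "bob"), ("bob", "carol")}

def symmetrize(rel):
    """Close a relation under symmetry, as bi-logic's unconscious does:
    whenever aRb holds, add bRa as well."""
    return rel | {(b, a) for a, b in rel}

sym = symmetrize(parent_of)
print(("bob", "alice") in parent_of)  # False: asymmetric logic keeps direction
print(("bob", "alice") in sym)        # True: under symmetry the direction is lost
```

Once the direction of the relation is lost, parent and child, part and whole, word and thing can stand in for one another, which is the sense in which symmetry demands that language be structured like the unconscious.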
Naomi Klein, AI machines aren’t ‘hallucinating’. But their makers are (Guardian, 8 May 2023)
Evgeny Morozov, The problem with artificial intelligence? It’s neither artificial nor intelligent (Guardian, 30 March 2023)
Fionn Murtagh, Ultrametric Model of Mind, I: Review (2012) https://arxiv.org/abs/1201.2711
Fionn Murtagh, Ultrametric Model of Mind, II: Application to Text Content Analysis (2012) https://arxiv.org/abs/1201.2719
Fionn Murtagh, Mathematical Representations of Matte Blanco’s Bi-Logic, based on Metric Space and Ultrametric or Hierarchical Topology: Towards Practical Application (Language and Psychoanalysis, 2014, 3 (2), 40-63)
Luca Possati, Algorithmic unconscious: why psychoanalysis helps in understanding AI (Palgrave Communications, 2020)
Eric Rayner, Unconscious Logic: An introduction to Matte Blanco's Bi-Logic and its uses (London: Routledge, 1995)
Marcus Tomalin, Artificial Intelligence: Can Systems Like ChatGPT Automate Empathy? (CRASSH Cambridge, 31 March 2023)
Stephen Wolfram, What is ChatGPT doing … and why does it work? (14 February 2023)