
Monday, July 07, 2025

From Anxiety to ChatGPT

The following story presents a nice reversal of the previous post in my chatbotics series, From ChatGPT to Anxiety.

A boss at Xbox (owned by Microsoft) went onto LinkyDin (owned by Microsoft) to recommend the use of chatbots (either Microsoft Copilot or OpenAI's ChatGPT) for any employee made redundant (and consequently anxious) as Microsoft switches its focus towards artificial intelligence. He suggested that chatbots could help reduce the emotional and cognitive load that comes with job loss.

Brandon Sheffield posted a screenshot of the LinkyDin post on BlueSky, commenting: "Something I've realized over time is people in general lack the ability to think in a broader scope and include context and eventualities. But after thousands of people get laid off from your company maybe don't suggest they turn to the thing you're trying to replace them with for solace."

It may well be the case that chatbots are capable of higher levels of emotional intelligence than some tech bosses, and many people might prefer to confide in a chatbot rather than in a representative of the company that has just fired them, whether for practical advice or mental health support. As Bilquise et al report, the emotional intelligence of chatbots is generally assessed in terms of accurately detecting the user's emotion and generating emotionally relevant responses, while Ruse et al have explored how the use of mental health apps has changed the way people think about mental health more generally. (See also my commentary on the Ruse article.)


Ghazala Bilquise, Samar Ibrahim and Khaled Shaalan, Emotionally Intelligent Chatbots: A Systematic Literature Review (Human Behavior and Emerging Technologies 2022)

Charlotte Edwards, Xbox producer tells staff to use AI to ease job loss pain (BBC 7 July 2025)

Lili Jamali, Microsoft to cut up to 9,000 more jobs as it invests in AI (BBC 3 July 2025)

Luke Plunkett, Xbox Producer Recommends Laid Off Workers Should Use AI To ‘Help Reduce The Emotional And Cognitive Load That Comes With Job Loss’ (Aftermath, 4 July 2025) HT Brandon Sheffield

Jesse Ruse, Ernst Schraube and Paul Rhodes, Left to their own devices: The significance of mental health apps on the construction of therapy and care (Subjectivity 2024)

Richard Veryard, On the Subjectivity of Devices (Subjectivity 2024)

Monday, April 15, 2024

From ChatGPT to Entropy

Large language models (LLMs) are trained on large quantities of content. And an increasing amount of the available content is itself generated by large language models. This sets up a form of recursion in which AI models increasingly rely on such content, producing an irreversible degradation in the quality of AI-generated content. This has been described as a form of entropy. Shumailov et al call it Model Collapse.
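
To see the mechanism in miniature, here is a toy sketch of my own (not the Shumailov et al experiment): a "model" that simply estimates the mean and spread of its training data, where each new generation is trained only on samples drawn from the previous generation's model. Over repeated generations the estimate drifts away from the original distribution and the spread tends to shrink, so the rare values in the original tails gradually stop being represented.

    import numpy as np

    rng = np.random.default_rng(0)

    # Generation 0: "human" data drawn from a standard normal distribution
    data = rng.normal(loc=0.0, scale=1.0, size=200)

    for generation in range(31):
        # "Train" a model on the current data: estimate its mean and spread
        mu, sigma = data.mean(), data.std()
        if generation % 5 == 0:
            print(f"generation {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
        # The next generation is trained only on this model's own output
        data = rng.normal(loc=mu, scale=sigma, size=200)

Shumailov et al observe the analogous effect in actual language models: rare events in the original data distribution disappear first, and successive generations converge on an increasingly impoverished approximation of it.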

There is an interesting comparison between this and data poisoning, where an AI model is deliberately polluted with bad data, often as an external attack, to influence and corrupt its output. Model collapse, by contrast, doesn't involve a hostile attack; it may instead reflect a form of self-pollution.

Is there a technical or sociotechnical fix for this? It seems to require limiting the training data: either sticking to the original data source, or only allowing new training data that can be verified as non-LLM-generated. Shumailov et al appeal to some form of "community-wide coordination ... to resolve questions of provenance", but this seems somewhat optimistic.

Dividing content by provenance is of course a non-trivial challenge, and automatic filters often misclassify content written by non-native speakers as AI-generated, which in turn further narrows the data available. Thus Shumailov et al conclude "it may become increasingly difficult to train newer versions of LLMs without access to data that was crawled from the Internet prior to the mass adoption of the technology, or direct access to data generated by humans at scale".
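
As a purely illustrative sketch of what such a provenance filter might look like (the document fields and the detector threshold here are hypothetical, and a real AI-text detector would misfire in exactly the way described above):

    from dataclasses import dataclass

    @dataclass
    class Document:
        text: str
        crawled_before_llm_era: bool   # crude provenance proxy: a pre-LLM crawl
        detector_score: float          # 0.0 = looks human, 1.0 = looks generated

    def keep_for_training(doc: Document, threshold: float = 0.5) -> bool:
        # Option 1: stick to data crawled before the mass adoption of LLMs
        if doc.crawled_before_llm_era:
            return True
        # Option 2: admit newer data only if a detector judges it human-written;
        # false positives here discard genuine human writing and shrink the pool
        return doc.detector_score < threshold

    corpus = [
        Document("pre-LLM forum post", True, 0.10),
        Document("new essay by a non-native speaker", False, 0.72),  # wrongly excluded
        Document("new LLM-generated listicle", False, 0.91),
    ]
    training_set = [doc for doc in corpus if keep_for_training(doc)]
    print(f"kept {len(training_set)} of {len(corpus)} documents")

Either branch narrows the available data: the first freezes the corpus at a point in time, while the second throws away any new human writing that the detector happens to dislike.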

What are the implications of this for the attainment of the promised benefits of AI? Imre Lakatos once suggested a distinction between progressive research programmes and degenerating ones: a degenerating programme either fails to make interesting (novel) predictions, or becomes increasingly unable to make true predictions. Many years ago, Hubert Dreyfus made exactly this criticism of AI. And to the extent that Large Language Models and other forms of AI are vulnerable to model collapse and entropy, this would again make AI look like a degenerating programme.

 


Thomas Claburn, What is Model Collapse and how to avoid it (The Register, 26 January 2024)

Ian Sample, Programs to detect AI discriminate against non-native English speakers, shows study (Guardian, 10 Jul 2023)

Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot and Ross Anderson, The Curse of Recursion: Training on Generated Data Makes Models Forget (arXiv:2305.17493v2, 31 May 2023)

David Sweenor, AI Entropy: The Vicious Circle of AI-Generated Content (LinkedIn, 28 August 2023)

Stanford Encyclopedia of Philosophy: Imre Lakatos

Wikipedia: Data Poisoning, Model Collapse, Self Pollution

Related posts: From ChatGPT to Infinite Sets (May 2023), ChatGPT and the Defecating Duck (Sept 2023), Creativity and Recursivity (Sept 2023)

Thursday, March 24, 2016

Artificial Belligerence

Back in the last century, when I was a postgraduate student in the Department of Computing and Control at Imperial College, some members of the department were involved in building an interactive exhibit for the Science Museum next door.

As I recall, the exhibit was designed to accept free text from members of the public, and would produce semi-intelligent responses, partly based on the users' input.

Anticipating that young visitors might wish to trick the software into repeating rude words, an obscenity filter was programmed into the software. When some of my fellow students managed to hack into the obscenity file, they were taken aback by the sheer quantity and obscurity of the vocabulary that the academic staff (including some innocent-looking female lecturers) were able to blacklist.

The chatbot recently launched onto Twitter and other social media platforms by Microsoft appears to be a more sophisticated version of that exhibit at the Science Museum so many years ago. But without the precautions.

Within 24 hours, following a series of highly offensive tweets, the chatbot (known as Tay) was taken down. Many of the offensive tweets have been deleted.


Before

Matt Burgess, Microsoft's new chatbot wants to hang out with millennials on Twitter (Wired, 23 March 2016)

Hugh Langley, We played 'Would You Rather' with Tay, Microsoft's AI chat bot (TechRadar, 23 March 2016)

Nick Summers, Microsoft's Tay is an AI chat bot with 'zero chill' (Engadget, 23 March 2016)


Just After

Peter Bright, Microsoft terminates its Tay AI chatbot after she turns into a Nazi (Ars Technica)

Andrew Griffin, Tay Tweets: Microsoft AI chatbot designed to learn from Twitter ends up endorsing Trump and praising Hitler (Independent, 24 March 2016)

Alex Hern, Microsoft scrambles to limit PR damage over abusive AI bot Tay (Guardian, 24 March 2016)

Elle Hunt, Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter (Guardian, 24 March 2016)

Jane Wakefield, Microsoft chatbot is taught to swear on Twitter (BBC News, 24 March 2016)


"So Microsoft created a chat bot that so perfectly emulates a teenager that it went off spouting offensive things just for the sake of getting attention? I would say the engineers in Redmond succeeded beyond their wildest expectations, myself." (Ars Praetorian)


What a difference a day makes!


Some Time After

Peter Lee, Learning from Tay's Introduction (Official Microsoft Blog, 25 March 2016)

Sam Shead, Microsoft says it faces 'difficult' challenges in AI design after chat bot Tay turned into a genocidal racist (Business Insider, 26 March 2016)

Paul Mason, The racist hijacking of Microsoft’s chatbot shows how the internet teems with hate (Guardian, 29 March 2016)

Dina Bass, Clippy’s Back: The Future of Microsoft Is Chatbots (Bloomberg, 30 March 2016)

Rajyasree Sen, Microsoft’s chatbot Tay is a mirror to Twitterverse (LiveMint, 31 March 2016)


Brief Reprise

Jon Russell, Microsoft AI bot Tay returns to Twitter, goes on spam tirade, then back to sleep (TechCrunch, 30 March 2016)



Updated 30 March 2016

Sunday, November 08, 2015

How Soon Might Humans Be Replaced At Work?

#CIPAai An interesting debate on Artificial Intelligence took place at the Science Museum this week, sponsored by the Chartered Institute of Patent Agents. When will humans be replaced by computers in any given job?

As this was the professional body for Patent Agents, they decided to pick an example close to their hearts. The specific motion being debated was that a patent would be filed and granted without human intervention within the next 25 years. The motion was passed roughly 80-60.

At first sight, this debate appeared to be an exercise in technological forecasting. When would AI be capable of creating new inventions and correctly drafting the patent application? And when would AI be capable of evaluating a patent application, carrying out the necessary searches, and granting a patent? Is this the kind of thing we should expect when the much-vaunted Singularity (predicted from around 2040 onwards) occurs?

Speaking for the motion, Calum Chase and Chrissie Lightfoot were enthusiastic about the technological opportunities of AI. They pointed out the incredible feats that had already been achieved as a result of machine learning, including some surprisingly creative solutions to technical problems.

Speaking against the motion, Nigel Hanley and Ilya Kazi acknowledged the great contribution of computer intelligence to support the patent agent and patent examiner, but were sceptical that anyone would trust a computer with such an important task as filing and granting patents. Nigel Hanley pointed out the limitations of internet search, which is of course designed to find things that other people have already found. (As A.A. Milne put it, Thinking With The Majority.)

The motion only required that a single patent be filed and granted without human intervention. It didn't need to be a particularly complicated one. But even to grant a single patent without human intervention would require a change in the law, presumably agreed internationally. (As it happens, my late father Kenneth Veryard was involved in the development of European Patent Law around 25 years ago, so I am aware of the time and painstaking effort required to achieve such international agreements.)

But this reframes the debate: from a technological one about the future capability of computers, to a sociopolitical one about the possibility of institutional change. Even if some algorithm were good enough to compete with humans, at least for some routine patent matters, the question is whether politicians would be willing to entrust these matters to an algorithm.

There are also strange questions of ownership and rights. Examples of computer intelligence always seem to come back to the usual suspects - Google, IBM Watson, and their ilk. If the creativity comes from the large computer networks run by these companies, then the patents will belong to these corporations. When Thomas Watson said, "I think there is a world market for maybe five computers", he wasn't talking about billions of laptops or trillions of internet-enabled things, but the very much smaller number of major computer networks capable of controlling everything else.

Can we realistically expect AI to take over one small area of patent law without taking over the much larger challenge of cleaning up legislation? After all, a genuine superintelligence might well come up with a much better basis for promoting innovation and protecting the interests of inventors than a few ancient principles of patent law.

But perhaps here's the killer argument. As the volume of patent applications increases, the cost of processing them all by hand becomes prohibitive. So governments could be tempted by the cost-savings offered by a clever algorithm. Even though governments have a very bad track record at realising cost savings from IT projects, politicians can often be persuaded to think it will be different this time.

So even if AI patent activity turns out not to be as good as when humans do it, and even if it subsequently results in a lot of seriously expensive litigation, it could seem a lot cheaper in the short-term.


References


http://www.cipadebate.org.uk/

Steven Johnson, Superintelligence Now (How We Get To Next, 28 October 2015)

James Nurton, Could a computer do your job? (Managing IP, 3 November 2015)

Wikipedia: Technological Singularity


Related Posts

The End of Google (June 2006), What does a patent say? (February 2023)


Update 2016

For the potential ramifications of robotic legal assistants, see Dana Remus and Frank S. Levy, Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law (30 December 2015), available at SSRN: http://ssrn.com/abstract=2701092 or http://dx.doi.org/10.2139/ssrn.2701092. Reported by Aviva Rutkin, Artificial intelligence could make lawyers more risk averse (New Scientist, 27 January 2016).

See also Ryan Abbott, I Think, Therefore I Invent: Creative Computers and the Future of Patent Law (Boston College Law Review, Vol 57 Issue 4, September 2016). Reported in Iain Thompson, AI software should be able to register its own patents, law prof argues (The Register, 17 October 2016)

Update 2021

Tom Knowles, Patently brilliant ... AI listed as inventor for first time (The Times, 28 July 2021)

Dagmar Monett tweeted: "Can an #AI invent something? No, it can't."

David Gunkel replied: I understand the issue here, but the question before the court in "Thaler v Commissioner of Patents [2021] FCA 879" was not "Can an #AI invent something?" The question decided by the court was "Can an #AI (DABUS) be named "inventor" on a patent application?" Different questions.

Update 2023

Further news on the DABUS case
AI cannot patent inventions, UK Supreme Court confirms (BBC News 21 December 2023) 

 

updated 18 October 2021, link added 21 Feb 2023, updated 22 Dec 2023