Back in the last century, when I was a postgraduate student in the Department of Computing and Control at Imperial College, some members of the department were involved in building an interactive exhibit for the Science Museum next door.
As I recall, the exhibit was designed to accept free text from members of the public and would produce semi-intelligent responses, partly based on the users' input.
Anticipating that young visitors might try to trick the software into repeating rude words, the developers programmed an obscenity filter into it. When some of my fellow students managed to hack into the obscenity file, they were taken aback by the sheer quantity and obscurity of the vocabulary that the academic staff (including some innocent-looking female lecturers) were able to blacklist.
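For what it's worth, the core of such a filter can be very simple. Here is a minimal Python sketch of the blacklist approach; the word list, function names, and canned response are placeholders of my own invention, since the original code is long gone:

```python
# A minimal sketch of a blacklist-style obscenity filter, assuming the
# simplest possible design: tokenise the input and reject it if any word
# appears on the banned list. All names and entries here are illustrative,
# not taken from the museum system.
import re

BLACKLIST = {"examplebadword", "anotherbadword"}  # placeholder entries

def is_clean(text: str) -> bool:
    """Return True if no blacklisted word appears in the input."""
    words = re.findall(r"[a-z']+", text.lower())
    return not any(word in BLACKLIST for word in words)

def respond(user_input: str) -> str:
    """Filter the input before handing it to the chat logic proper."""
    if not is_clean(user_input):
        return "Let's keep it polite, shall we?"
    return f"You said: {user_input}"  # stand-in for the real reply generator
```

The code is the easy part; the hard part, as my colleagues discovered, is compiling the blacklist itself.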
The chatbot that Microsoft recently launched onto Twitter and other social media platforms appears to be a more sophisticated version of that Science Museum exhibit of so many years ago. But without the precautions.
Within 24 hours, following a series of highly offensive tweets, the chatbot (known as Tay) was taken down. Many of the offensive tweets have been deleted.
Before
Matt Burgess, Microsoft's new chatbot wants to hang out with millennials on Twitter (Wired, 23 March 2016)
Hugh Langley, We played 'Would You Rather' with Tay, Microsoft's AI chat bot (TechRadar, 23 March 2016)
Nick Summers, Microsoft's Tay is an AI chat bot with 'zero chill' (Engadget, 23 March 2016)
Just After
Peter Bright, Microsoft terminates its Tay AI chatbot after she turns into a Nazi (Ars Technica, 24 March 2016)
Andrew Griffin, Tay Tweets: Microsoft AI chatbot designed to learn from Twitter ends up endorsing Trump and praising Hitler (Independent, 24 March 2016)
Alex Hern, Microsoft scrambles to limit PR damage over abusive AI bot Tay (Guardian, 24 March 2016)
Elle Hunt, Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter (Guardian, 24 March 2016)
Jane Wakefield, Microsoft chatbot is taught to swear on Twitter (BBC News, 24 March 2016)
"So Microsoft created a chat bot that so perfectly emulates a teenager
that it went off spouting offensive things just for the sake of getting
attention? I would say the engineers in Redmond succeeded beyond their wildest expectations, myself." (Ars Praetorian)
What a difference a day makes!
Some Time After
Peter Lee, Learning from Tay's Introduction (Official Microsoft Blog, 25 March 2016)
Sam Shead, Microsoft says it faces 'difficult' challenges in AI design after chat bot Tay turned into a genocidal racist (Business Insider, 26 March 2016)
Paul Mason, The racist hijacking of Microsoft’s chatbot shows how the internet teems with hate (Guardian, 29 March 2016)
Dina Bass, Clippy’s Back: The Future of Microsoft Is Chatbots (Bloomberg, 30 March 2016)
Rajyasree Sen, Microsoft’s chatbot Tay is a mirror to Twitterverse (LiveMint, 31 March 2016)
Brief Reprise
Jon Russell, Microsoft AI bot Tay returns to Twitter, goes on spam tirade, then back to sleep (TechCrunch, 30 March 2016)
Updated 30 March 2016