Saturday, May 11, 2019

Whom does the technology serve?

When regular hyperbole isn't sufficient, writers often refer to new technologies as The Holy Grail of something or other. As I pointed out in my post on Chatbot Ethics, this has some important ethical implications.

Because in the mediaeval Parsifal legend, at a key moment in the story, our hero fails to ask the critical question: Whom Does the Grail Serve? And when technologists enthuse about the latest inventions, they typically overlook the same question: Whom Does the Technology Serve?

In a new article on driverless cars, Dr Ashley Nunes of MIT argues that academics have allowed themselves to be distracted by versions of the Trolley Problem (Whom Shall the Vehicle Kill?) and have neglected some much more important ethical questions.

For one thing, Nunes argues that so-called autonomous vehicles are never going to be fully autonomous. There will always be ways of controlling cars remotely, so the idea of a lone robot struggling with some ethical dilemma is just philosophical science fiction. Last year, he told Jesse Dunietz that he had not yet found a safety-critical transport system without real-time human oversight.

And in any case, road safety is never about one car at a time; it is about deconfliction - cars avoiding each other as well as pedestrians. With human driving, there are multiple deconfliction mechanisms that allow many vehicles to occupy the same space without hitting each other. These include traffic signals, road markings and other conventions indicating right of way, as well as signals (including honking and flashing lights) that let drivers negotiate with each other, or show that they are willing to wait for a pedestrian to cross the road in front of them. Equivalent mechanisms will be required to enable so-called autonomous vehicles to provide a degree of transparency of intention, and therefore trust. (See Matthews et al; see also Applin and Fischer.) See my post on the Ethics of Interoperability.
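
As an illustration of what machine-readable transparency of intention might look like, here is a minimal sketch of a hypothetical intent-broadcast message. The field names, values and the toy negotiation rule are my own assumptions, not any real V2X standard or anything proposed by Matthews et al.

    # Hypothetical intent-broadcast message - illustrative only, not a real V2X standard.
    from dataclasses import dataclass
    from enum import Enum

    class Manoeuvre(Enum):
        PROCEED = "proceed"
        YIELD_TO_PEDESTRIAN = "yield_to_pedestrian"
        CHANGE_LANE_LEFT = "change_lane_left"
        EMERGENCY_STOP = "emergency_stop"

    @dataclass
    class IntentMessage:
        vehicle_id: str
        manoeuvre: Manoeuvre
        confidence: float       # 0.0-1.0: how committed the planner is to this manoeuvre
        time_horizon_s: float   # how far ahead the declared intent applies

    def right_of_way(a: IntentMessage, b: IntentMessage) -> str:
        """Toy deconfliction rule: the vehicle whose declared intent is more confident
        proceeds; the other yields. A real system would need a far richer protocol,
        plus signals that pedestrians can read too."""
        return a.vehicle_id if a.confidence >= b.confidence else b.vehicle_id

The point is not the particular rule, but that intentions become explicit, inspectable signals rather than inferences from vehicle movement.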


But according to Nunes, "the most important question that we should be asking about this technology" is "Who stands to gain from its life-saving potential?" Because "if those who most need it don’t have access, whose lives would we actually be saving?"

In other words, Whom Does The Grail Serve?




Sally Applin and Michael Fischer, Applied Agency: Resolving Multiplexed Communication in Automobiles (Adjunct Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '12), October 17–19, 2012, Portsmouth, NH, USA) HT @AnthroPunk

Jesse Dunietz, Despite Advances in Self-Driving Technology, Full Automation Remains Elusive (Undark, 22 November 2018) HT @SafeSelfDrive

Ashley Nunes, Driverless cars: researchers have made a wrong turn (Nature Briefing, 8 May 2019) HT @vdignum @HumanDriving

Milecia Matthews, Girish Chowdhary and Emily Kieson, Intent Communication between Autonomous Vehicles and Pedestrians (2017)

Wikipedia: Trolley Problem


Related posts 
Towards Chatbot Ethics (May 2019), Ethics of Interoperability (May 2019).

Sunday, May 05, 2019

Towards Chatbot Ethics

When over-enthusiastic articles describe chatbotics as the Holy Grail (for digital marketing or online retail or whatever), I should normally ignore this as the usual hyperbole. But in this case, I'm going to take it literally. Let me explain.

As followers of the Parsifal legend will know, at a critical point in the story Parsifal fails to ask the one question that matters: "Whom does the Grail serve?"

And anyone who wishes to hype chatbots as some kind of "holy grail" must also ask the same question: "Whom does the Chatbot serve?" IBM puts this at the top of its list of ethical questions for chatbots, as does @ashevat (formerly with Slack).

To the extent that a chatbot is providing information and advice, it is subject to many of the same ethical considerations as any other information source - is the information complete, truthful and unbiased, or does it serve the information provider's commercial interest? Perhaps the chatbot (or rather its owner) is getting a commission if you eat at the recommended restaurant, just as hotel concierges have always done. A restaurant review in an online or traditional newspaper may appear to be independent, but restaurants have many ways of rewarding favourable reviews even without cash changing hands. You might think that ethics requires such arrangements, at the very least, to be transparent.
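
For illustration, here is a minimal sketch of how such transparency could be surfaced in a chatbot's reply. The field names and the disclosure wording are my own assumptions, not any vendor's API.

    # Illustrative sketch: a recommendation that carries its own disclosure.
    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        name: str
        sponsored: bool      # does the operator earn a commission on this suggestion?
        disclosure: str = ""

    def render(rec: Recommendation) -> str:
        reply = f"You might enjoy {rec.name}."
        if rec.sponsored:
            reply += f" (Disclosure: {rec.disclosure}.)"
        return reply

    print(render(Recommendation("the trattoria around the corner", True,
                                "we receive a commission if you book through this chat")))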

But an important difference between a chatbot and a newspaper article is that the chatbot has a greater ability to respond to the particular concerns and vulnerabilities of the user. Shiva Bhaskar discusses how this power can be used for manipulation and even intimidation. And making sure the user knows that they are talking to a bot rather than a human does not guard against an emotional reaction: Joseph Weizenbaum was one of the first in the modern era to recognize this.

One area where particularly careful ethical scrutiny is required is the use of chatbots for mental health support. There are obvious concerns about efficacy, safety and privacy, and such systems need to undergo clinical trials for effectiveness and potential adverse outcomes, just like any other medical intervention. Kira Kretzschmar et al argue that it is also essential that these platforms are specifically programmed to discourage over-reliance, and that users are encouraged to seek human support in an emergency.
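
As a rough illustration of the kind of safeguard Kretzschmar et al describe, here is a minimal sketch - emphatically not a clinical tool - in which crisis language triggers a hand-over to human support rather than a continued bot conversation. The keyword list, wording and helper function are assumptions for illustration only.

    # Illustrative safeguard sketch - not a clinical tool.
    CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "overdose"}

    def ordinary_reply(message: str) -> str:
        # Placeholder for the bot's normal conversational model.
        return "Thanks for sharing that. Can you tell me more about how you're feeling?"

    def respond(user_message: str) -> str:
        lowered = user_message.lower()
        if any(term in lowered for term in CRISIS_TERMS):
            # Escalate: discourage reliance on the bot and point to human help.
            return ("It sounds like you may be in crisis. I'm an automated service, "
                    "not a person - please contact your local emergency number or a "
                    "crisis helpline so you can talk to someone right now.")
        return ordinary_reply(user_message)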


Another ethical problem with chatbots is related to the Weasley doctrine (named after Arthur Weasley in Harry Potter and the Chamber of Secrets):
"Never trust anything that can think for itself if you can't see where it keeps its brain."
Many people have installed these curious cylindrical devices in their homes, but is that where the intelligence is actually located? When a private conversation was accidentally transmitted from Portland to Seattle, engineers at Amazon were able to inspect the logs, coming up with a somewhat implausible explanation as to how this might have occurred. Obviously this implies a lack of boundaries between the device and the manufacturer. And as @geoffreyfowler reports, chatbots don't only send recordings of your voice back to Master Control, they also send status reports from all your other connected devices.

Smart home, huh? Smart for whom? Transparency for whom? Or to put it another way, whom does the chatbot serve?





Shiva Bhaskar, The Chatbots That Will Manipulate Us (30 June 2017)

Geoffrey A. Fowler, Alexa has been eavesdropping on you this whole time (Washington Post, 6 May 2019) HT @hypervisible

Sidney Fussell, Behind Every Robot Is a Human (The Atlantic, 15 April 2019)

Gary Horcher, Woman says her Amazon device recorded private conversation, sent it out to random contact (25 May 2018)

Kira Kretzschmar et al, Can Your Phone Be Your Therapist? Young People’s Ethical Perspectives on the Use of Fully Automated Conversational Agents (Chatbots) in Mental Health Support (Biomed Inform Insights, 11, 5 March 2019)

Trips Reddy, The code of ethics for AI and chatbots that every brand should follow (IBM, 15 October 2017)

Amir Shevat, Hard questions about bot ethics (Slack Platform Blog, 12 October 2016)

Tom Warren, Amazon explains how Alexa recorded a private conversation and sent it to another user (The Verge, 24 May 2018)

Joseph Weizenbaum, Computer Power and Human Reason (WH Freeman, 1976)


Related post: Whom does the technology serve? (May 2019)

Thursday, March 07, 2019

Affective Computing

At #NYTnewwork last month, @Rana el-Kaliouby asked "What if doctors could objectively measure your mental state?" Dr el-Kaliouby is one of the pioneers of affective computing, and is founder of a company called Affectiva. Some of her early work was building apps that helped autistic people to read expressions. She now argues that "artificial emotional intelligence is key to building reciprocal trust between humans and AI".

Affectiva competes with some of the big tech companies (including Amazon, IBM and Microsoft), which now offer "emotional analysis" or "sentiment analysis" alongside facial recognition.

One proposed use of this technology is in the classroom. The idea is to install a webcam in the classroom: the system watches the students, monitors their emotional state, and gives feedback to the teacher in order to maximize student engagement. (For example, Mark Lieberman reports a university trial in Minnesota, based on the Microsoft system. Lieberman includes some sceptical voices in his report, and the trial is discussed further in the 2018 AI Now report.)

So how do such systems work? The computer is trained to recognize a "happy" face by being shown large numbers of images of happy faces. This depends on a team of human coders labelling the images.
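
To make the dependence on human labelling concrete, here is a minimal training sketch. It assumes a hypothetical folder of images sorted into label directories (data/happy, data/sad, and so on) by human coders; everything the classifier "knows" about happiness comes from those labels.

    # Minimal supervised-learning sketch: the model learns whatever the coders labelled.
    from pathlib import Path
    import numpy as np
    from PIL import Image
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def load_dataset(root="data", size=(48, 48)):
        images, labels = [], []
        for label_dir in Path(root).iterdir():            # e.g. data/happy, data/sad
            for img_path in label_dir.glob("*.png"):
                img = Image.open(img_path).convert("L").resize(size)
                images.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
                labels.append(label_dir.name)             # the human coder's label
        return np.stack(images), np.array(labels)

    X, y = load_dataset()
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))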

And this coding generally relies on a "classical" theory of emotions. Much of this work is credited to a research psychologist called Paul Ekman, who developed a Facial Action Coding System (FACS). Most of these programs use a version called EMFACS, which identifies six or seven universal "hardwired" emotions: anger, contempt, disgust, fear, happiness, sadness and surprise, which can be detected by observing facial muscle movements.
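
The rule-like structure behind this is easy to caricature in code. The action-unit combinations below are commonly cited prototypes (cheek raiser plus lip corner puller for happiness, and so on), but treat them as simplified illustrations rather than the definitive EMFACS tables.

    # Caricature of EMFACS-style emotion detection from Action Units (AUs).
    EMOTION_PROTOTYPES = {
        "happiness": {6, 12},        # cheek raiser + lip corner puller
        "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
        "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
        "anger":     {4, 5, 7, 23},  # brow lowerer + lid raiser/tightener + lip tightener
    }

    def predict_emotion(detected_aus):
        """Return the first prototype fully contained in the detected AUs."""
        for emotion, prototype in EMOTION_PROTOTYPES.items():
            if prototype <= set(detected_aus):
                return emotion
        return "neutral / unclassified"

    print(predict_emotion({6, 12, 25}))   # -> happiness

The brittleness is visible straight away: anything that doesn't match a prototype falls into an unclassified bucket, and the prototypes themselves encode the classical theory's assumptions.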

Lisa Feldman Barrett, one of the leading critics of the classical theory, argues that emotions are more complicated, and are a product of one's upbringing and environment. “Emotions are real, but not in the objective sense that molecules or neurons are real. They are real in the same sense that money is real – that is, hardly an illusion, but a product of human agreement.”

It has also been observed that people from different parts of the world, or from different ethnic groups, express emotions differently. (Who knew?) Algorithms that fail to deal with ethnic diversity may be grossly inaccurate and set people up for racial discrimination. For example, in a recent study of two facial recognition software products, one product consistently interpreted black sportsmen as angrier than white sportsmen, while the other labelled the black subjects as contemptuous.
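
The audit behind such a finding can be quite simple in outline: run the same emotion-scoring model over comparable images of different groups and compare the score distributions. The sketch below assumes a hypothetical CSV of per-image scores; it is not Rhue's actual analysis.

    # Hypothetical bias audit sketch - not the published study's code.
    import pandas as pd

    scores = pd.read_csv("emotion_scores.csv")   # assumed columns: image_id, group, anger, contempt
    print(scores.groupby("group")[["anger", "contempt"]].mean())
    # A large, consistent gap between groups for comparable posed expressions
    # suggests the model, not the faces, is the source of the difference.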

But Affectiva prides itself on dealing with ethnic diversity. When Rana el-Kaliouby spoke to Oscar Schwartz recently, while acknowledging that the technology is not foolproof, she insisted on the importance of collecting "diverse data sets" in order to compile "ethnically based benchmarks" - "codified assumptions about how an emotion is expressed within different ethnic cultures". In her most recent video, she also insisted on the importance of the diversity of the team building these systems.

Shoshana Zuboff describes sentiment analysis as yet another example of the behavioural surplus that helps Big Tech accumulate what she calls surveillance capital.
"Your unconscious - where feelings form before there are words to express them - must be recast as simply one more sources of raw-material supply for machine rendition and analysis, all of it for the sake of more-perfect prediction. ...  This complex of machine intelligence is trained to isolate, capture, and render the most subtle and intimate behaviors, from an inadvertent blink to a jaw that slackens in surprise for a fraction of a second." (Zuboff 2019, pp 282-3.)
Zuboff relies heavily on a long interview with el-Kaliouby in the New Yorker in 2015, where she expressed optimism about the potential of this technology, not only to read emotions but to affect them.
"I do believe that if we have information about your emotional experiences we can help you be in a more positive mood and influence your wellness."
In her talk last month, without explicitly mentioning Zuboff's book, el-Kaliouby put a strong emphasis on the ethical values of Affectiva, explaining that they have turned down offers of funding from the security, surveillance and lie-detection sectors, in order to concentrate on areas such as safety and mental health. I wonder if IBM ("Principles for the Cognitive Era") and Microsoft ("The Future Computed: Artificial Intelligence and its Role in Society") will take the same position?

HT @scarschwartz @raffiwriter



AI Now Report 2018 (AI Now Institute, December 2018)

Rana el-Kaliouby, Teaching Machines to Feel (Bloomberg via YouTube, 20 Sep 2017), Emotional Intelligence (New York Times via YouTube, 6 Mar 2019)

Lisa Feldman Barrett, Psychological Construction: The Darwinian Approach to the Science of Emotion (Emotion Review Vol. 5, No. 4, October 2013) pp 379-389

Raffi Khatchadourian, We Know How You Feel (New Yorker, 19 January 2015)

Mark Lieberman, Sentiment Analysis Allows Instructors to Shape Course Content around Students’ Emotions (Inside Higher Education, 20 February 2018)

Lauren Rhue, Racial Influence on Automated Perceptions of Emotions (9 November 2018) http://dx.doi.org/10.2139/ssrn.3281765

Oscar Schwartz, Don’t look now: why you should be worried about machines reading your emotions (The Guardian, 6 Mar 2019)

Shoshana Zuboff, The Age of Surveillance Capitalism (UK Edition: Profile Books, 2019)

Wikipedia: Facial Action Coding System

Related posts: Data and Intelligence Principles from Major Players (June 2018), Shoshana Zuboff on Surveillance Capitalism (February 2019)

Sunday, February 24, 2019

Hidden Functionality

Consumer surveillance was in the news again this week. Apparently Google forgot to tell consumers that there was a cuckoo microphone in the Nest.

So what's new? A few years ago, people were getting worried about a microphone inside the Samsung Smart TV that would eavesdrop on your conversations. (HT Parker Higgins)

But at least in those cases we think we know which corporation is responsible. In other cases, this may not be so clear-cut. For example, who decided to install a camera into the seat-back entertainment systems used by several airlines?

And there is a much more general problem here. It is usually cheaper to use general-purpose hardware than to design special-purpose hardware. For this reason, most IoT devices have far more processing power and functionality than they strictly need. This extra functionality carries two dangers. Firstly, if the device is hacked, the functionality can be co-opted for covert or malicious purposes. (For example, IoT devices with weak or non-existent security can be recruited into a global botnet.) Secondly, sooner or later someone will think of a justification for switching the functionality on. (In the case of the Nest microphone, Google already did - which is what alerted people to the microphone's existence.)

So who is responsible for the failure of a component to act properly, who is responsible for the limitation of purpose, and how can this responsibility be transparently enforced?

Some US politicians have started talking about a technology version of "food labelling" - so that people can avoid products and services if they are sensitive to a particular "ingredient". With physical products, this information would presumably be added to the safety leaflet that you find in the box whenever you buy anything electrical. With online services, this information should be included in the Privacy Notice, which again nobody reads. (There are various estimates about the number of weeks it would take you to read all these notices.) So clearly it is unreasonable to expect the consumer to police this kind of thing.
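
A back-of-envelope calculation shows why. All the figures below are illustrative assumptions, but they land in the same territory as the published estimates.

    # Rough arithmetic on the cost of actually reading privacy notices.
    policies_per_year = 1_500   # assumed: roughly one per distinct website visited
    words_per_policy = 2_500    # assumed average length
    reading_speed_wpm = 250     # assumed reading speed

    hours = policies_per_year * words_per_policy / reading_speed_wpm / 60
    print(f"about {hours:.0f} hours a year - roughly {hours / 40:.1f} working weeks")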

Just as the supermarkets have a "free from" aisle where they sell all the overpriced gluten-free food, perhaps we can ask electronics retailers to have a "connectivity-free" section, where the products can be guaranteed safe from Ray Ozzie's latest initiative, which is to build devices that connect automatically by default, rather than wait for the user to switch the connectivity on. (Hasn't he heard of privacy and security by default?)

And of course high-tech functionality is no longer limited to products that are obviously electrical. The RFID tags in your clothes may not always be deactivated when you leave the store. And for other examples of SmartClothing, check out my posts on Wearable Tech.




Nick Bastone, Google says the built-in microphone it never told Nest users about was 'never supposed to be a secret' (Business Insider, 19 February 2019)

Nick Bastone, Democratic presidential candidates are tearing into Google for the hidden Nest microphone, and calling for tech gadget 'ingredients' labels (Business Insider, 21 February 2019)

Ina Fried, Exclusive: Ray Ozzie wants to wirelessly connect the world (Axios, 22 February 2019)

Melissa Locker, Someone found cameras in Singapore Airlines’ in-flight entertainment system (Fast Company, 20 February 2019)

Ben Schoon, Nest Secure can now be turned into another Google Assistant speaker for your home (9to5Google, 4 February 2019)

Related posts: Have you got Big Data in your Underwear? (December 2014), Towards the Internet of Underthings (November 2015), Pax Technica - On Risk and Security (November 2017), Outdated Assumptions - Connectivity Hunger (June 2018), Shoshana Zuboff on Surveillance Capitalism (February 2019)

Monday, April 02, 2018

Blockchain and the Edge of Obfuscation - Privacy

According to Wikipedia,
a blockchain is a decentralized, distributed and public digital ledger that is used to record transactions across many computers so that the record cannot be altered retroactively without the alteration of all subsequent blocks and the collusion of the network. (Wikipedia, retrieved 31 March 2018)

Some people are concerned that the essential architecture of blockchain conflicts with the requirements of privacy, especially as represented by the EU General Data Protection Regulation (GDPR), which comes into force on 25th May 2018. In particular, it is not obvious how an immutable blockchain can cope with the requirement to allow data subjects to amend and erase personal data.
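
The conflict is easiest to see with a toy hash chain. The sketch below is not any particular blockchain implementation, but it shows why amending a single recorded item invalidates every later block unless the whole chain is rewritten.

    # Toy hash chain - illustrative only.
    import hashlib, json

    def block_hash(prev, data):
        return hashlib.sha256(json.dumps([prev, data]).encode()).hexdigest()

    def append(chain, data):
        prev = chain[-1]["hash"] if chain else "0" * 64
        chain.append({"prev": prev, "data": data, "hash": block_hash(prev, data)})

    def valid(chain):
        for i, block in enumerate(chain):
            if block["hash"] != block_hash(block["prev"], block["data"]):
                return False
            if i and block["prev"] != chain[i - 1]["hash"]:
                return False
        return True

    chain = []
    append(chain, "alice@example.com")   # personal data recorded on-chain
    append(chain, "payment of 42")
    chain[0]["data"] = "[erased]"        # attempt to honour an erasure request
    print(valid(chain))                  # -> False: the chain no longer verifies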


Optimists have suggested a number of compromises.

Firstly, the data may be divided between the Blockchain and another data store, known as the Offchain. If the personal data isn't actually held on the blockchain, then it's easier to amend and delete. (There is a sketch of this pattern at the end of this list of compromises.)

Secondly, the underlying meaning of the information can be "completely obfuscated". Researchers at MIT are inventing a 21st century Enigma machine, which will store "secret contracts" instead of the normal "smart contracts".

    Historical note: In the English-speaking world, Alan Turing is often credited with cracking the original Enigma machine, but it was Polish mathematicians who cracked it first.

Thirdly, there may be some wriggle-room in how the word "erasure" is interpreted. Irish entrepreneur Shane Brett thinks that this term may be transposed differently in different EU member states. (This sounds like a recipe for bureaucratic confusion.) It has been suggested that personal data could be "blacklisted" rather than actually deleted.

Finally, as reported by David Meyer, blockchain experts can just argue that GDPR is "already out of date" and hope regulators won't be too "stubborn" to "adjust" the regulation.
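
Here is a minimal sketch of the first compromise, the offchain pattern: only a salted hash goes on the chain, while the personal data sits in an ordinary, erasable store. The names and functions are illustrative, not any specific product's API.

    # Offchain pattern sketch - illustrative only.
    import hashlib, os

    offchain_store = {}   # stands in for a conventional, erasable database

    def put_on_chain(digest):
        # Placeholder for an append to the (hypothetical) immutable ledger.
        print("anchored on chain:", digest)

    def record(personal_data):
        salt = os.urandom(16)
        digest = hashlib.sha256(salt + personal_data.encode()).hexdigest()
        offchain_store[digest] = (salt, personal_data)   # the erasable copy
        put_on_chain(digest)                             # immutable reference only
        return digest

    def erase(digest):
        # GDPR-style erasure: drop the data and the salt, so the on-chain
        # hash can no longer be linked back to the individual.
        offchain_store.pop(digest, None)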


But the problem with these compromises is that once you dilute the pure blockchain concept, some of the supposed benefits of blockchain evaporate, and it just becomes another (resource-hungry) data store. Perhaps it is blockchain that is "already out of date".



Vitalik Buterin, Privacy on the Blockchain (Ethereum Blog, 15 January 2016)

Michèle Finck, Blockchains and the GDPR (Oxford Business Law Blog, 13 February 2018)

Josh Hall, How Blockchain could help us take back control of our privacy (The Guardian, 21 March 2018)

David Meyer, Blockchain is on a collision course with EU privacy law (IAPP, 27 February 2018) via The Next Web

Dean Steinbeck, How New EU Privacy Laws Will Impact Blockchain (Coin Telegraph, 30 March 2018)

Wikipedia: Blockchain, Enigma machine


Tuesday, January 09, 2018

Blockchain and the Edge of Disruption - Kodak

Shares in Eastman Kodak more than doubled today following the announcement of the Kodakcoin, "a photocentric cryptocurrency to empower photographers and agencies to take greater control in image rights management".

As Andrew Hill points out, blockchain enthusiasts have often mentioned rights management as one of the more promising applications of distributed ledger technology. @willms_ listed half a dozen initiatives back in August 2016, and blockchain investor @alextapscott had a piece about it in the Harvard Business Review last year.

In recent years, Kodak has been held up (probably unfairly) as an example of a company that didn't understand digital. Perhaps to rub this message home, today's story in the Verge is illustrated with stock footage of analogue film. But the bounce in the share price indicates that many investors are willing to give Kodak another chance to prove its digital mettle.

However, some commentators are cynical.



The point of blockchain is to support distributed trust. But the rights management service provided by Kodak doesn't rely on distributed trust; it relies entirely on Kodak. If you trust Kodak, you don't need the blockchain to validate a Kodak-operated service; and if you don't trust Kodak, you probably won't be using the service anyway. So what's the point of blockchain in this example?




Chloe Cornish, Kodak pivot to blockchain sends shares flying (FT, 9 January 2018)

Chris Foxx and Leo Kelion, CES 2018: Kodak soars on KodakCoin and Bitcoin mining plans (BBC News, 9 January 2018)

David Gerard, Kodak’s ICO for a stock photo site that doesn’t exist yet. But the stock price! (10 January 2018)

Jeremy Herron, Kodak Surges After Announcing Plans to Launch Cryptocurrency Called 'Kodakcoin' (Bloomberg, 9 January 2018)

Andrew Hill, Kodak’s convenient click into the blockchain (FT, 9 January 2018)

Shannon Liao, Kodak announces its own cryptocurrency and watches stock price skyrocket (The Verge, 9 January 2018)

Willy Shih, The Real Lessons From Kodak’s Decline (Sloan Management Review, Summer 2016)

Don Tapscott and Alex Tapscott, Blockchain Could Help Artists Profit More from Their Creative Works (HBR, 22 March 2017)

Jessie Willms, Is Blockchain-Powered Copyright Protection Possible? (Bitcoin Magazine, 9 August 2016)



Related posts

Blockchain and the Edge of Disruption - Brexit (September 2017)
Blockchain and the Edge of Disruption - Fake News (September 2017)

Sunday, December 03, 2017

IOT is coming to town

You better watch out



#WatchOut Analysis of smartwatches for children (Norwegian Consumer Council, October 2017). BoingBoing comments that
Kids' smart watches are a security/privacy dumpster-fire.

Charlie Osborne, Smartwatch security fails to impress: Top devices vulnerable to cyberattack (ZDNet, 22 July 2015)

A new study into the security of smartwatches found that 100 percent of popular device models contain severe vulnerabilities.

Matt Hamblen, As smartwatches gain traction, personal data privacy worries mount (Computerworld, 22 May 2015)
Companies could use wearables to track employees' fitness, or even their whereabouts. 


You better not cry

Source: Affectiva


Rana el Kaliouby, The Mood-Aware Internet of Things (Affectiva, 24 July 2015)

Six Wearables to Track Your Emotions (A Plan For Living)

Soon it might be just as common to track your emotions with a wearable device as it is to monitor your physical health. 

Anna Umanenko, Emotion-sensing technology in the Internet of Things (Onyx Systems)


Better not pout


Shaun Moore, Fooling Facial Recognition (Medium, 26 October 2017)

Mingzhe Jiang et al, IoT-based Remote Facial Expression Monitoring System with sEMG Signal (IEEE 2016)

Facial expression recognition is studied across several fields such as human emotional intelligence in human-computer interaction to help improving machine intelligence, patient monitoring and diagnosis in clinical treatment. 


I'm telling you why


Maria Korolov, Report: Surveillance cameras most dangerous IoT devices in enterprise (CSO, 17 November 2016)

Networked security cameras are the most likely to have vulnerabilities. 

Leor Grebler, Why do IOT devices die (Medium, 3 December 2017)

IOT is coming to town


Nick Ismail, The role of the Internet of Things in developing Smart Cities (Information Age, 18 November 2016)


It's making a list
And checking it twice


Daan Pepijn, Is blockchain tech the missing link for the success of IoT? (TNW, 21 September 2017)



Gonna find out
Who's naughty and nice


Police Using IoT To Detect Crime (Cyber Security Intelligence, 14 Feb 2017)

James Pallister, Will the Internet of Things set family life back 100 years? (Design Council, 3 September 2015)


It sees you when you're sleeping
It knows when you're awake


But don't just monitor your sleep. Understand it. The Sense app gives you instant access to everything you could want to know about your sleep. View a detailed breakdown of your sleep cycles, see what happened during your night, discover trends in your sleep quality, and more. (Hello)

Octav G, Samsung’s SLEEPsense is an IoT-enabled sleep tracker (SAM Mobile, 2 September 2015)



It knows if you've been bad or good
So be good for goodness sake!


US intelligence chief: we might use the internet of things to spy on you (The Guardian, 9 February 2016)

Ben Rossi, IoT and free will: how artificial intelligence will trigger a new nanny state (Information Age, 7 June 2016)





Twitter Version


Related Posts

Pax Technica - The Book (November 2017)
Pax Technica - The Conference (November 2017)
Pax Technica - On Risk and Security (November 2017)
The Smell of Data (December 2017)

Updated 10 December 2017