Thursday, March 07, 2019

Affective Computing

At #NYTNewWork last month, Rana el-Kaliouby asked "What if doctors could objectively measure your mental state?" Dr el-Kaliouby is one of the pioneers of affective computing, and is founder of a company called Affectiva. Some of her early work was building apps that helped autistic people to read expressions. She now argues that "artificial emotional intelligence is key to building reciprocal trust between humans and AI".

Affectiva competes with some of the big tech companies (including Amazon, IBM and Microsoft), which now offer "emotional analysis" or "sentiment analysis" alongside facial recognition.

One proposed use of this technology is in the classroom. The idea is to install a webcam in the classroom: the system watches the students, monitors their emotional state, and gives feedback to the teacher in order to maximize student engagement. (For example, Mark Lieberman reports a university trial in Minnesota, based on the Microsoft system. Lieberman includes some sceptical voices in his report, and the trial is discussed further in the 2018 AI Now report.)

So how do such systems work? The computer is trained to recognize a "happy" face by being shown large numbers of images of happy faces. This depends on a team of human coders labelling the images.
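The dependence on human-labelled training data can be illustrated with a deliberately simplified sketch. Real systems use deep neural networks; the toy nearest-centroid classifier below (with hypothetical two-dimensional "face features") only shows the basic shape of the pipeline: human coders label examples, the system averages them, and unseen faces get the nearest label.

```python
# Toy sketch of supervised emotion labelling. The feature vectors are
# hypothetical (e.g. mouth curvature, brow height); real systems extract
# thousands of features, but the reliance on human-labelled data is the same.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labelled_examples):
    """labelled_examples: dict mapping a human-assigned label to feature vectors."""
    return {label: centroid(vs) for label, vs in labelled_examples.items()}

def classify(model, features):
    """Return the label whose centroid is closest (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# Labels supplied by (imaginary) human coders
training_data = {
    "happy": [[0.9, 0.5], [0.8, 0.6]],
    "sad":   [[0.1, 0.2], [0.2, 0.1]],
}
model = train(training_data)
print(classify(model, [0.85, 0.55]))  # -> happy
```

The point of the sketch is that the classifier can only ever reproduce the coders' judgements: whatever assumptions the labelling team made are baked into the model.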

And this coding generally relies on a "classical" theory of emotions. Much of this work is credited to a research psychologist called Paul Ekman, who developed a Facial Action Coding System (FACS). Most of these programs use a version called EMFACS, which identifies six or seven universal "hardwired" emotions: anger, contempt, disgust, fear, happiness, sadness and surprise, which can be detected by observing facial muscle movements.
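In FACS terms, each emotion is associated with a combination of numbered facial muscle movements ("Action Units"). A minimal sketch of that lookup is below; the AU combinations shown are commonly cited simplifications for illustration, not the full EMFACS specification.

```python
# Illustrative EMFACS-style lookup: map observed Action Units (AUs) to a
# "basic" emotion. These AU sets are simplified textbook examples.

EMOTION_AUS = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid raiser/tightener + lip tightener
}

def infer_emotion(observed_aus):
    """Return every emotion whose full AU set appears in the observation."""
    observed = set(observed_aus)
    return [emotion for emotion, aus in EMOTION_AUS.items() if aus <= observed]

print(infer_emotion([6, 12]))  # -> ['happiness']
```

Note how much the classical theory is doing here: the scheme assumes a fixed, universal mapping from muscle movements to inner states, which is precisely what critics like Barrett (below) dispute.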

Lisa Feldman Barrett, one of the leading critics of the classical theory, argues that emotions are more complicated, and are a product of one's upbringing and environment. “Emotions are real, but not in the objective sense that molecules or neurons are real. They are real in the same sense that money is real – that is, hardly an illusion, but a product of human agreement.”

It has also been observed that people from different parts of the world, or from different ethnic groups, express emotions differently. (Who knew?) Algorithms that fail to deal with ethnic diversity may be grossly inaccurate and set people up for racial discrimination. For example, in a recent study of two facial recognition software products, one product consistently interpreted black sportsmen as angrier than white sportsmen, while the other labelled the black subjects as contemptuous.

But Affectiva prides itself on dealing with ethnic diversity. When Rana el-Kaliouby spoke to Oscar Schwartz recently, while acknowledging that the technology is not foolproof, she insisted on the importance of collecting "diverse data sets" in order to compile “ethnically based benchmarks”, that is, "codified assumptions about how an emotion is expressed within different ethnic cultures". In her most recent video, she also insisted on the importance of diversity in the team building these systems.

Shoshana Zuboff describes sentiment analysis as yet another example of the behavioural surplus that helps Big Tech accumulate what she calls surveillance capital.
"Your unconscious - where feelings form before there are words to express them - must be recast as simply one more sources of raw-material supply for machine rendition and analysis, all of it for the sake of more-perfect prediction. ...  This complex of machine intelligence is trained to isolate, capture, and render the most subtle and intimate behaviors, from an inadvertent blink to a jaw that slackens in surprise for a fraction of a second." (Zuboff 2019, pp 282-3.)
Zuboff relies heavily on a long interview with el-Kaliouby in the New Yorker in 2015, where she expressed optimism about the potential of this technology, not only to read emotions but to affect them.
"I do believe that if we have information about your emotional experiences we can help you be in a more positive mood and influence your wellness."
In her talk last month, without explicitly mentioning Zuboff's book, el-Kaliouby put a strong emphasis on the ethical values of Affectiva, explaining that it has turned down offers of funding from the security, surveillance and lie-detection sectors, in order to concentrate on areas such as safety and mental health. I wonder whether IBM ("Principles for the Cognitive Era") and Microsoft ("The Future Computed: Artificial Intelligence and its Role in Society") will take the same position.

HT @scarschwartz @raffiwriter

AI Now Report 2018 (AI Now Institute, December 2018)

Rana el-Kaliouby, Teaching Machines to Feel (Bloomberg via YouTube, 20 Sep 2017), Emotional Intelligence (New York Times via YouTube, 6 Mar 2019)

Lisa Feldman Barrett, Psychological Construction: The Darwinian Approach to the Science of Emotion (Emotion Review, Vol. 5, No. 4, October 2013) pp 379-389

Raffi Khatchadourian, We Know How You Feel (New Yorker, 19 January 2015)

Mark Lieberman, Sentiment Analysis Allows Instructors to Shape Course Content around Students’ Emotions (Inside Higher Education, 20 February 2018)

Lauren Rhue, Racial Influence on Automated Perceptions of Emotions (November 9, 2018)

Oscar Schwartz, Don’t look now: why you should be worried about machines reading your emotions (The Guardian, 6 Mar 2019)

Shoshana Zuboff, The Age of Surveillance Capitalism (UK Edition: Profile Books, 2019)

Wikipedia: Facial Action Coding System

Related posts: Data and Intelligence Principles from Major Players (June 2018), Shoshana Zuboff on Surveillance Capitalism (February 2019)

Sunday, February 24, 2019

Hidden Functionality

Consumer surveillance was in the news again this week. Apparently Google forgot to tell consumers that there was a cuckoo microphone in the Nest.

So what's new? A few years ago, people were getting worried about a microphone inside the Samsung Smart TV that could eavesdrop on your conversations. (HT @Parker Higgins)

But at least in those cases we think we know which corporation is responsible. In other cases, this may not be so clear-cut. For example, who decided to install a camera into the seat-back entertainment systems used by several airlines?

And there is a much more general problem here. It is usually cheaper to use general-purpose hardware than to design special-purpose hardware. For this reason, most IoT devices have far more processing power and functionality than they strictly need. This extra functionality carries two dangers. Firstly, if the device is hacked, the functionality can be coopted for covert or malicious purposes. (For example, IoT devices with weak or non-existent security can be recruited into a global botnet.) Secondly, sooner or later someone will think of a justification for switching the functionality on. (In the case of the Nest microphone, Google already did; that is what alerted people to the microphone's existence.)

So who is responsible for the failure of a component to act properly, who is responsible for the limitation of purpose, and how can this responsibility be transparently enforced?

Some US politicians have started talking about a technology version of "food labelling" - so that people can avoid products and services if they are sensitive to a particular "ingredient". With physical products, this information would presumably be added to the safety leaflet that you find in the box whenever you buy anything electrical. With online services, this information should be included in the Privacy Notice, which again nobody reads. (There are various estimates about the number of weeks it would take you to read all these notices.) So clearly it is unreasonable to expect the consumer to police this kind of thing.

Just as the supermarkets have a "free from" aisle where they sell all the overpriced gluten-free food, perhaps we can ask electronics retailers to have a "connectivity-free" section, where the products can be guaranteed safe from Ray Ozzie's latest initiative, which is to build devices that connect automatically by default, rather than wait for the user to switch the connectivity on. (Hasn't he heard of privacy and security by default?)

And of course high-tech functionality is no longer limited to products that are obviously electrical. The RFID tags in your clothes may not always be deactivated when you leave the store. And for other examples of SmartClothing, check out my posts on Wearable Tech.

Nick Bastone, Google says the built-in microphone it never told Nest users about was 'never supposed to be a secret' (Business Insider, 19 February 2019)

Nick Bastone, Democratic presidential candidates are tearing into Google for the hidden Nest microphone, and calling for tech gadget 'ingredients' labels (Business Insider, 21 February 2019)

Ina Fried, Exclusive: Ray Ozzie wants to wirelessly connect the world (Axios, 22 February 2019)

Melissa Locker, Someone found cameras in Singapore Airlines’ in-flight entertainment system (Fast Company, 20 February 2019)

Ben Schoon, Nest Secure can now be turned into another Google Assistant speaker for your home (9 to 5 Google, 4 February 2019)

Related posts: Have you got Big Data in your Underwear? (December 2014), Towards the Internet of Underthings (November 2015), Pax Technica - On Risk and Security (November 2017), Outdated Assumptions - Connectivity Hunger (June 2018), Shoshana Zuboff on Surveillance Capitalism (February 2019)

Monday, April 02, 2018

Blockchain and the Edge of Obfuscation - Privacy

According to Wikipedia,
a blockchain is a decentralized, distributed and public digital ledger that is used to record transactions across many computers so that the record cannot be altered retroactively without the alteration of all subsequent blocks and the collusion of the network. (Wikipedia, retrieved 31 March 2018)

Some people are concerned that the essential architecture of blockchain conflicts with the requirements of privacy, especially as represented by the EU General Data Protection Regulation (GDPR), which comes into force on 25th May 2018. In particular, it is not obvious how an immutable blockchain can cope with the requirement to allow data subjects to amend and erase personal data.

Optimists have suggested a number of compromises.

Firstly, the data may be divided between the Blockchain and another data store, known as the Offchain. If the personal data isn't actually held on the blockchain, then it's easier to amend and delete.
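A minimal sketch of this "offchain" pattern is below. All names are hypothetical: only a hash of the personal data is written to the append-only chain, while the data itself lives in a mutable off-chain store, so GDPR-style erasure becomes deletion of the off-chain record, leaving an unresolvable hash behind on the chain.

```python
import hashlib

chain = []     # append-only list: simulates the immutable ledger
offchain = {}  # mutable key-value store holding the actual personal data

def record(subject_id, personal_data):
    """Store the data off-chain; commit only its hash to the chain."""
    digest = hashlib.sha256(personal_data.encode()).hexdigest()
    offchain[subject_id] = personal_data
    chain.append({"subject": subject_id, "hash": digest})
    return digest

def erase(subject_id):
    """'Right to erasure': delete the off-chain data; the chain is untouched."""
    offchain.pop(subject_id, None)

def resolve(entry):
    """Look up a chain entry; returns None once the data has been erased."""
    data = offchain.get(entry["subject"])
    if data is None:
        return None  # the on-chain hash now points at nothing
    assert hashlib.sha256(data.encode()).hexdigest() == entry["hash"]
    return data

record("alice", "alice@example.com")
erase("alice")
print(resolve(chain[0]))  # -> None
```

Whether a dangling hash counts as "erasure" under the GDPR is, of course, exactly the legal question at issue.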

Secondly, the underlying meaning of the information can be "completely obfuscated". Researchers at MIT are building a 21st-century Enigma machine, which will store "secret contracts" instead of the usual "smart contracts".

    Historical note: In the English-speaking world, Alan Turing is often credited with cracking the original Enigma machine, but it was Polish mathematicians who cracked it first.

Thirdly, there may be some wriggle-room in how the word "erasure" is interpreted. Irish entrepreneur Shane Brett thinks that this term may be transposed differently in different EU member states. (This sounds like a recipe for bureaucratic confusion.) It has been suggested that personal data could be "blacklisted" rather than actually deleted.

Finally, as reported by David Meyer, blockchain experts can just argue that GDPR is "already out of date" and hope regulators won't be too "stubborn" to "adjust" the regulation.

But the problem with these compromises is that once you dilute the pure blockchain concept, some of the supposed benefits of blockchain evaporate, and it just becomes another (resource-hungry) data store. Perhaps it is blockchain that is "already out of date".

Vitalik Buterin, Privacy on the Blockchain (Ethereum Blog, 15 January 2016)

Michèle Finck, Blockchains and the GDPR (Oxford Business Law Blog, 13 February 2018)

Josh Hall, How Blockchain could help us take back control of our privacy (The Guardian, 21 March 2018)

David Meyer, Blockchain is on a collision course with EU privacy law (IAPP, 27 February 2018) via The Next Web

Dean Steinbeck, How New EU Privacy Laws Will Impact Blockchain (Coin Telegraph, 30 March 2018)

Wikipedia: Blockchain, Enigma machine

Tuesday, January 09, 2018

Blockchain and the Edge of Disruption - Kodak

Shares in Eastman Kodak more than doubled today following the announcement of the Kodakcoin, "a photocentric cryptocurrency to empower photographers and agencies to take greater control in image rights management".

As Andrew Hill points out, blockchain enthusiasts have often mentioned rights management as one of the more promising applications of distributed ledger technology. @willms_ listed half a dozen initiatives back in August 2016, and blockchain investor @alextapscott had a piece about it in the Harvard Business Review last year.

In recent years, Kodak has been held up (probably unfairly) as an example of a company that didn't understand digital. Perhaps to rub this message home, today's story in the Verge is illustrated with stock footage of analogue film. But the bounce in the share price indicates that many investors are willing to give Kodak another chance to prove its digital mettle.

However, some commentators are cynical.

The point of blockchain is to support distributed trust. But the rights management service provided by Kodak doesn't rely on distributed trust, it relies entirely on Kodak. If you trust Kodak, you don't need the blockchain to validate a Kodak-operated service; and if you don't trust Kodak, you probably won't be using the service anyway. So what's the point of blockchain in this example?

Chloe Cornish, Kodak pivot to blockchain sends shares flying (FT, 9 January 2018)

Chris Foxx and Leo Kelion, CES 2018: Kodak soars on KodakCoin and Bitcoin mining plans (BBC News, 9 January 2018)

David Gerard, Kodak’s ICO for a stock photo site that doesn’t exist yet. But the stock price! (10 January 2018)

Jeremy Herron, Kodak Surges After Announcing Plans to Launch Cryptocurrency Called 'Kodakcoin' (Bloomberg, 9 January 2018)

Andrew Hill, Kodak’s convenient click into the blockchain (FT, 9 January 2018)

Shannon Liao, Kodak announces its own cryptocurrency and watches stock price skyrocket (The Verge, 9 January 2018)

Willy Shih, The Real Lessons From Kodak’s Decline (Sloan Management Review, Summer 2016)

Don Tapscott and Alex Tapscott, Blockchain Could Help Artists Profit More from Their Creative Works (HBR, 22 March 2017)

Jessie Willms, Is Blockchain-Powered Copyright Protection Possible? (Bitcoin Magazine, 9 August 2016)

Related posts

Blockchain and the Edge of Disruption - Brexit (September 2017)
Blockchain and the Edge of Disruption - Fake News (September 2017)

Sunday, December 03, 2017

IOT is coming to town

You better watch out

#WatchOut Analysis of smartwatches for children (Norwegian Consumer Council, October 2017). BoingBoing comments that
Kids' smart watches are a security/privacy dumpster-fire.

Charlie Osborne, Smartwatch security fails to impress: Top devices vulnerable to cyberattack (ZDNet, 22 July 2015)

A new study into the security of smartwatches found that 100 percent of popular device models contain severe vulnerabilities.

Matt Hamblen, As smartwatches gain traction, personal data privacy worries mount (Computerworld, 22 May 2015)
Companies could use wearables to track employees' fitness, or even their whereabouts. 

You better not cry


Rana el Kaliouby, The Mood-Aware Internet of Things (Affectiva, 24 July 2015)

Six Wearables to Track Your Emotions (A Plan For Living)

Soon it might be just as common to track your emotions with a wearable device as it is to monitor your physical health. 

Anna Umanenko, Emotion-sensing technology in the Internet of Things (Onyx Systems)

Better not pout

Shaun Moore, Fooling Facial Recognition (Medium, 26 October 2017)

Mingzhe Jiang et al, IoT-based Remote Facial Expression Monitoring System with sEMG Signal (IEEE 2016)

Facial expression recognition is studied across several fields such as human emotional intelligence in human-computer interaction to help improving machine intelligence, patient monitoring and diagnosis in clinical treatment. 

I'm telling you why

Maria Korolov, Report: Surveillance cameras most dangerous IoT devices in enterprise (CSO, 17 November 2016)

Networked security cameras are the most likely to have vulnerabilities. 

Leor Grebler, Why do IOT devices die (Medium, 3 December 2017)

IOT is coming to town

Nick Ismail, The role of the Internet of Things in developing Smart Cities (Information Age, 18 November 2016)

It's making a list And checking it twice

Daan Pepijn, Is blockchain tech the missing link for the success of IoT? (TNW, 21 September 2017)

Gonna find out Who's naughty and nice

Police Using IoT To Detect Crime (Cyber Security Intelligence, 14 Feb 2017)

James Pallister, Will the Internet of Things set family life back 100 years? (Design Council, 3 September 2015)

It sees you when you're sleeping It knows when you're awake

But don't just monitor your sleep. Understand it. The Sense app gives you instant access to everything you could want to know about your sleep. View a detailed breakdown of your sleep cycles, see what happened during your night, discover trends in your sleep quality, and more. (Hello)

Octav G, Samsung’s SLEEPsense is an IoT-enabled sleep tracker (SAM Mobile, 2 September 2015)

It knows if you've been bad or good So be good for goodness sake!

US intelligence chief: we might use the internet of things to spy on you (The Guardian, 9 Feb 2016)

Ben Rossi, IoT and free will: how artificial intelligence will trigger a new nanny state (Information Age, 7 June 2016)

Twitter Version

Related Posts

Pax Technica - The Book (November 2017)
Pax Technica - The Conference (November 2017)
Pax Technica - On Risk and Security (November 2017)
The Smell of Data (December 2017)

Updated 10 December 2017

Saturday, November 25, 2017

Pax Technica - On Risk and Security

#paxtechnica Some further thoughts arising from the @CRASSHlive conference in Cambridge on The Implications of the Internet of Things. (For a comprehensive account, see @LaurieJ's livenotes.)

Many people are worried about the security implications of the Internet of Things. The world is being swamped with cheap internet-enabled devices. As the manufacturing costs, size and power consumption of these devices are driven down, most producers have neither the expertise nor the capacity to build any kind of security into them.

One of the reasons why this problem is increasing is that it is cheaper to use a general-purpose chip than to design a special-purpose chip. So most IoT devices have far more processing power and functionality than they strictly need. This extra functionality can then be coopted for covert or malicious purposes. IoT devices may easily be recruited into a global botnet, and devices from some sources may even have been covertly designed for this purpose.

Sensors are bad enough - think of hacked baby monitors and sex toys. Additional concerns apply to IoT actuators - devices that can produce physical effects. For example: lightbulbs that can flash (triggering epileptic fits), thermostats that can switch on simultaneously across a city (melting the grid), and centrifuges that can spin out of control (as in the sabotage of Iran's nuclear programme).

Jon Crowcroft proposed that some of this could be addressed in terms of safety and liability. Safety is a useful driver for increased regulation, and insurance companies will be looking for ways to protect themselves and their corporate customers. While driverless cars generate much discussion, similar questions of safety and liability arise from any cars containing significant quantities of new technology. What if the brake algorithm fails? And given the recent history of cheat software by car manufacturers, can we trust the car not to alter the driver logs in order to evade liability for an accident?

In many cases, the consumer can be persuaded that there are benefits from internet-enabled devices, and these benefits may depend on some level of interoperability between multiple devices. But we aren't equipped to reason about the trade-off between accessibility/usability and security/privacy.

For comparison's sake, consider a retailer who has to decide whether to place the merchandise in locked glass cases or on open shelves. Open shelves will result in more sales, but also more shoplifting. So the retailer locks up the jewelry but not the pencils or the furniture, and this is based on a common-sense balance of value and risk.

But with the Internet of Things, people generally don't have a good enough understanding of value and risk to be able to reason intelligently about this kind of trade-off. Philip Howard advises users to appreciate that devices "have an immediate function that is useful to you and an indirect function that is useful to others" (p255). But just knowing this is not enough. True security will only arise when we have the kind of transparency (or visibility or unconcealment) that I referenced in my previous post.

Related Posts

Defeating the Device Paradigm (October 2015)
Pax Technica - The Book (November 2017)
Pax Technica - The Conference (November 2017)
The Smell of Data (December 2017)
Outdated Assumptions - Connectivity Hunger (June 2018)


Cory Doctorow, The Coming War on General Computation (2011)

Carl Herberger, How hackers will exploit the Internet of Things in 2017 (HelpNet Security, 14 November 2016)

Philip Howard, Pax Technica: How The Internet of Things May Set Us Free or Lock Us Up (Yale 2015)

Laura James, Pax Technica Notes (Session 1, Session 2, Session 3, Session 4)

Holly Robbins, The Path for Transparency for IoT Technologies (ThingsCon, June 2017)

Jack Wallen, Five nightmarish attacks that show the risks of IoT security (ZDNet, 1 June 2017)

Sunday, November 19, 2017

Pax Technica - The Book

In preparation for a @CRASSHlive conference in Cambridge this coming week (Pax Technica: The Implications of the Internet of Things), I've been reading Philip Howard's book, subtitled How The Internet of Things May Set Us Free or Lock Us Up.

I'm going to start my review by quoting Howard's definition of his subject.
The "internet of things" consists of human-made objects with small power supplies, embedded sensors, and addresses on the internet. Most of these networked devices are everyday items that are sending and receiving data about their conditions and our behavior. Unlike mobile phones and computers, devices on these networks are not designed for deliberate social interaction, content creation, or cultural consumption. The bulk of these networked devices simply communicate with other devices: coffeemakers, car parts, clothes, and a plethora of other products. This will not be an internet you experience through a browser. Indeed, as the technology develops, many of us will be barely aware that so many objects around us have power, are sensing, and are sending and receiving data. (xi)
IoT experts may quibble with some of the details of this definition, but it broadly makes sense.

My first problem with Howard's book is that he doesn't stick to this definition. He talks a lot about devices in general, but most of the time he is talking about other kinds of devices, such as mobile phones and chatbots. The book contains a wealth of reporting on the disruption caused by digital networks. But much of this is not about the internet of things as he defines it, but about social media, big data, fake news and other internet phenomena. These are important topics to be sure, which have been excellently addressed by other sociologists such as Zeynep Tufekci, as well as in the previous CRASSH conference Power Switch. But the book claims to be about something different.

My second problem with Howard's book is that he doesn't really question the notion of "device". There is a considerable literature on the philosophy of technology going back to Heidegger, via Hubert Dreyfus and Albert Borgmann. In his Question Concerning Technology, Heidegger wrote
In our time, things are not even regarded as objects, because their only important quality has become their readiness for use. Today all things are being swept together into a vast network in which their only meaning lies in their being available to serve some end that will itself also be directed towards getting everything under control.

Albert Borgmann introduced the notion of the Device Paradigm to analyse the way "technological devices" are perceived and consumed in modern society. In many situations, there is a fetish of the "device", obscuring the network infrastructure that is required to deliver the affordance or "commodity" of the device.

One of the consequences of this is that discussion of the internet of things tends to focus on the "things" rather than the "internet of". At a healthcare event I attended a couple of years ago, various technology companies were exhibiting a range of wearable or implantable devices - some monitoring, some actively intervening. A patient with multiple conditions might be wearing several such devices. But these devices don't, and currently cannot, communicate with one another (contrary to what Howard's definition, quoted above, suggests). Instead, as Howard acknowledges is the case for most devices, they are "designed to report data back to designers, manufacturers, and third party analysts" (p211) - either directly or via an app on the user's smartphone. So that's basically a hub-and-spoke network.

To thrive in the Pax Technica, Howard advises, "you can be a more sophisticated user ... you can be a functionally prominent political actor by thoughtfully managing your internet of things" (p254-5). But what would that entail? Holly Robbins talks about a language to unmask the complexity of IoT. In 1986, before I had read any Heidegger or Borgmann, I called this Visibility. Heidegger calls it Unconcealment (Unverborgenheit).

Borgmann's own approach is based on what he calls focal things and practices. As Wendt argues, the Internet of Things must create meaningful interactions in order to succeed.

... something found in all of us: the need to take an active role in the world, to shape and design things, and to form rituals around activities. This is not to say we can’t do these things with smart objects, but it does underscore the importance of conscious, embodied interaction with things. The Internet of Things will only be successful if products are designed with purpose.

So I'm hoping that these aspects of the Internet of Things will be discussed on Friday ...

Related Posts

Understanding the Value Chain of the Internet of Things (June 2015)
Some marketing experts are seeing the Internet of Things as a way of reasserting control over the consumer. 

Defeating the Device Paradigm (Oct 2015)
The Internet of Things is not a random collection of devices. It is a safety-critical system of systems, and must be understood (and regulated) as such. But it often suits certain commercial interests to focus our attention on the devices and away from the rest of the system. This is related to what Borgmann calls the Device Paradigm. 

Towards the Internet of Underthings (Nov 2015)
We are now encouraged to account for everything we do: footsteps, heartbeats, posture. Until recently this kind of micro-attention to oneself was regarded as slightly obsessional, nowadays it seems to be perfectly normal. And of course these data are collected, and sent to the cloud, and turned into someone else's big data. (Good luck with those privacy settings, by the way.)
Pax Technica - The Conference (November 2017)
Pax Technica - On Risk and Security (November 2017)


Albert Borgmann, Technology and the Character of Contemporary Life (Chicago, 1984)

Oliver Christ, Martin Heidegger's Notions of World and Technology in the Internet of Things Age (Asian Journal of Computer and Information Systems, Volume 03, Issue 02, April 2015)

Philip Howard, Pax Technica: How The Internet of Things May Set Us Free or Lock Us Up (Yale 2015)

Holly Robbins, The Path for Transparency for IoT Technologies (ThingsCon, June 2017)

Zeynep Tufekci, Engineering the public: Big data, surveillance and computational politics (First Monday, Volume 19, Number 7, 7 July 2014)

Richard Veryard, The Role of Visibility in Systems (Human Systems Management 6, 1986)

Thomas Wendt, Internet of Things and the Work of the Hands (UX Magazine, 12 March 2014)

Wikipedia: Device Paradigm