
Sunday, April 10, 2022

Lie Detectors at Airports

@jamesvgingerich reports that the EU is putting lie detector robots on its borders. @Abebab is horrified.

 

There are several things worth noting here.

Firstly, lie detectors work by detecting involuntary responses (eye movements, heart rate) that are thought to be proxies for mendacity. But there are often alternative explanations for these responses, so interpreting them is highly problematic. See my post on Memory and the Law (June 2008).

Secondly, there is a lot of expensive and time-wasting technology installed at airports already, which has dubious value in detecting genuine threats, but may help to make people feel safer. Bruce Schneier calls this Security Theatre. See my posts on the False Sense of Security (June 2019) and Identity, Privacy and Security at Heathrow Terminal Five (March 2008).

What is more important is to consider the consequences of adding this component (whether reliable or otherwise) to the larger system. In my post Listening for Trouble (June 2019), I discussed the use of Aggression Detection microphones in US schools, following an independent study that was carried out with active collaboration from the supplier of the equipment. Obviously this kind of evaluation requires some degree of transparency.

Most important of all is the ethical question. Is this technology biased against certain categories of subject, and what are the real-world consequences of being falsely identified by this system? Is having a human in the loop sufficient protection against the dangers of algorithmic profiling? See Algorithmic Bias (March 2021).

Given the inaccuracy of detection, there may be a significant rate of false positives and false negatives. False positives affect the individual concerned, who may suffer consequences ranging from inconvenience and delay to much worse. False negatives mean that a person gets away with an undetected lie, which has consequences for society as a whole.
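
To see why this matters, here is a minimal back-of-the-envelope sketch in Python. All the numbers are hypothetical assumptions for illustration - no accuracy figures for the iBorderCtrl system are implied - but they show how a low base rate of serious lying turns even a fairly accurate detector into a machine that mostly flags truthful travellers.

    # Hypothetical base-rate arithmetic for a border "lie detector".
    # All figures are illustrative assumptions, not measured values.
    travellers = 1_000_000      # passengers screened
    liar_rate = 0.001           # assume 1 in 1000 is lying about something serious
    sensitivity = 0.80          # assumed chance of flagging a genuine liar
    specificity = 0.90          # assumed chance of clearing a truthful traveller

    liars = travellers * liar_rate
    truthful = travellers - liars

    true_positives = liars * sensitivity
    false_positives = truthful * (1 - specificity)
    false_negatives = liars * (1 - sensitivity)

    flagged = true_positives + false_positives
    print(f"Flagged: {flagged:,.0f}, of whom falsely flagged: "
          f"{false_positives:,.0f} ({false_positives / flagged:.0%})")
    print(f"Liars who slip through undetected: {false_negatives:,.0f}")

With these assumptions, well over 99% of the people flagged would be telling the truth, while a fifth of the genuine liars would get through anyway.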

How much you think this matters depends on what you think they might be lying about, and how important this is. For example, it may be quicker to say you packed your suitcase yourself and it hasn't been out of your sight, even if this is not strictly true, because any other answer may trigger loads of other time-wasting questions. However, other lies may be more dangerous ...

 


For more details on the background of this initiative, see

Matthias Monroy, EU project iBorderCtrl: Is the lie detector coming or not? (26 April 2021)

Tuesday, April 27, 2021

The Allure of the Smart City

The concept of the smart city seems to encompass a broad range of sociotechnical initiatives, including community-based healthcare, digital electricity, housing affordability and sustainability, next-generation infrastructure, noise pollution, quality of air and water, robotic furniture, transport and mobility, and urban planning. The smart city is not a single technology as such; it is more like an assemblage of technologies.

Within this mix, there is often a sincere attempt to address some serious social and environmental concerns, such as reducing the city's carbon footprint. However, Professor Rob Kitchin notes a tendency towards greenwashing or even ethics washing.

Kitchin also raises concerns about civic paternalism - city authorities and their tech partners presuming to know what's best for the citizenry.

On the other hand, John Koetsier makes the If-We-Don't-Do-It-The-Chinese-Will argument. The same point was recently made by Jeremy Fleming in his 2021 Vincent Briscoe lecture. (See my post on the Invisibility of Infrastructure.)

Meanwhile, here is a small and possibly unrepresentative sample of Smart City initiatives in the West that have reached the press recently.

  • Madrid with IBM
  • Portland with Google Sidewalk - cancelled Feb 2021
  • San Jose with Intel - pilot programme
  • Toronto with Google Sidewalk (Quayside) - cancelled May 2020

 


 

Daniel Doctoroff, Why we’re no longer pursuing the Quayside project — and what’s next for Sidewalk Labs (Sidewalk Talk, 7 May 2020)

Rob Kitchin, The Ethics of Smart Cities (RTE, 27 April 2019)

John Koetsier, 9 Things We Lost When Google Canceled Its Smart Cities Project In Toronto (Forbes, 13 May 2020) 

Ryan Mark and Gregory Anya, Ethics of Using Smart City AI and Big Data: The Case of Four Large European Cities (Orbit, Vol 2/2, 2019)

Juan Pedro Tomás, Smart city case study: San Jose, California (Enterprise IOT Insights, 5 October 2017)

Jane Wakefield, The Google city that has angered Toronto (BBC News, 18 May 2019), Google-linked smart city plan ditched in Portland (BBC News, 23 February 2021)

 

See also IOT is coming to town (December 2017), On the invisibility of infrastructure (April 2021)

 

Tuesday, July 16, 2019

Nudge Technology

People are becoming aware of the ways in which AI and big data can be used to influence people, in accordance with Nudge Theory. Individuals can be nudged to behave in particular ways, and large-scale social systems (including elections and markets) can apparently be manipulated. In other posts, I have talked about the general ethics of nudging systems. This post will concentrate on the technological aspects.

Technologically mediated nudges are delivered by a sociotechnical system we could call a Nudge System. This system might contain several different algorithms and other components, and may even have a human-in-the-loop. Our primary concern here is about the system as a whole.

As an example, I am going to consider a digital advertisement in a public place, which shows anti-smoking messages whenever it detects tobacco smoke.

Typically, a nudge system would perform several related activities; a code sketch of the whole pipeline follows the list below.

1. There would be some mechanism for "reading" the situation - for example, detecting the events that might trigger a nudge, as well as determining the context. This might be a simple sense-and-respond mechanism, or it might involve some more sophisticated analysis using some kind of model. There is typically an element of surveillance here. In our example, let us imagine that the system is able to distinguish different brands of cigarette, and determine how many people are smoking in its vicinity.

2. Assuming that there was some variety in the nudges produced by the system, there would be a mechanism for selecting or constructing a specific nudge, using a set of predefined nudges or nudge templates. For example, different anti-smoking messages for the expensive brands versus the cheap brands. Combined with other personal data, the system might even be able to name and shame the smokers.

3. There would then be a mechanism for delivering or otherwise executing the nudge - for example, private (to a person's phone) or public (via a display board). We might call this the nudge agent. In some cases, the nudge may be delivered by a human, but prompted by an intelligent system. If the nudge is publicly visible, this could allow other people to infer the circumstances leading to the nudge - and therefore a potential breach of privacy. (For example, letting your friends and family know that you were having a sneaky cigarette, when you had told them that you had given up.)

4. In some cases, there might be a direct feedback loop, giving the system immediate data about the human response to the nudge. Obviously this will not always be possible. Nevertheless, we would expect the system to retain a record of the delivered nudges, for future analysis. To support multiple feedback tempos (as discussed in my work on Organizational Intelligence) there could be multiple components performing the feedback and learning function. Typically, the faster loops would be completely automated (autonomic) while the slower loops would have some human interaction.
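
To make the division of responsibilities concrete, here is a minimal Python sketch of the smoke-detecting advertisement described above, with one method per activity. Every name, template and sensor field is invented for illustration; a real system would involve trained classifiers, sensor hardware and data retention policies that are only stubbed out here.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Situation:
        smoke_detected: bool
        brand_guess: str       # e.g. "premium" or "budget"
        smokers_nearby: int

    # Hypothetical nudge templates, keyed by the detected context (activity 2).
    NUDGE_TEMPLATES = {
        "premium": "That habit is costing you a small fortune. Ready to quit?",
        "budget": "Free help to quit smoking is available nearby.",
    }

    class NudgeSystem:
        def __init__(self):
            self.log = []      # record of delivered nudges (activity 4)

        def read_situation(self, sensor_input: dict) -> Situation:
            # Activity 1: "reading" the situation - a stub standing in
            # for a trained smoke/brand classifier.
            return Situation(**sensor_input)

        def select_nudge(self, situation: Situation):
            # Activity 2: select from predefined templates.
            if not situation.smoke_detected:
                return None
            return NUDGE_TEMPLATES.get(situation.brand_guess)

        def deliver_nudge(self, message: str) -> None:
            # Activity 3: delivery via the public display board.
            print(message)

        def run(self, sensor_input: dict) -> None:
            situation = self.read_situation(sensor_input)
            message = self.select_nudge(situation)
            if message:
                self.deliver_nudge(message)
            # Activity 4: retain a record for the slower feedback loops.
            self.log.append((datetime.now(), situation, message))

    # One sensing cycle with fabricated sensor data.
    NudgeSystem().run({"smoke_detected": True, "brand_guess": "budget", "smokers_nearby": 2})

Even in this toy version, it is clear that bias or error could enter through the classifier stub, the templates, or the choice of delivery channel - which is the point of the following paragraphs.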

There would typically be algorithms to support each of these activities, possibly based on some form of Machine Learning, and there is the potential for algorithmic bias at several points in the design of the system, as well as various forms of inaccuracy (for example false positives, where the system incorrectly detects tobacco smoke). More information doesn't always mean better information - for example, someone might design a sensor that would estimate the height of the smoker, in order to detect underage smokers - but this obviously introduces new possibilities of error.

In many cases, there will be a separation between the technology engineers who build systems and components, and the social engineers who use these systems and components to produce some commercial or social effects. This raises two different ethical questions.

Firstly, what does responsible use of nudge technology look like - in other words, what are the acceptable ways in which nudge technology can be deployed? What purposes and what kinds of content are acceptable, and what testing and continuous monitoring are needed to detect any signals of harm or bias? Should the nudge be private to the nudgee, or could it be visible to others? What technical and organizational controls should be in place before the nudge technology is switched on?

And secondly, what does responsible nudge technology look like - in other words, technology that can be used safely and reliably, with reasonable levels of transparency and user control?

We may note that nudge technologies can be exploited by third parties with a commercial or political intent. For example, there are constant attempts to trick or subvert the search and recommendation algorithms used by the large platforms, and Alex Hern recently reported on Google's ongoing battle to combat misinformation and promotion of extreme content. So one of the requirements of responsible nudge technology is being Robust Against Manipulation.

We may also note that if there is any bias anywhere, this may either be inherent in the design of the nudge technology itself, or may be introduced by the users of the nudge technology when customizing it for a specific purpose. For example, nudges may be deliberately worded as "dog whistles" - designed to have a strong effect on some subjects while being ignored by others - and this can produce significant and possibly unethical bias in the working of the system. But this bias is not in the algorithm but in the nudge templates, and there may be other ways in which bias is relevant to nudging in general, so the question of algorithmic bias is not the whole story.



Alex Hern, Google tweaked algorithm after rise in US shootings (Guardian, 2 July 2019)

Wikipedia: Nudge Theory

Related posts: Organizational Intelligence in the Control Room (October 2010), On the Ethics of Technologically Mediated Nudge (May 2019), The Nudge as a Speech Act (May 2019), Algorithms and Governmentality (July 2019), Robust Against Manipulation (July 2019)


Wednesday, June 26, 2019

Listening for Trouble

Many US schools and hospitals have installed Aggression Detection microphones that claim to detect sounds of aggression, thus allowing staff or security personnel to intervene to prevent violence. Sound Intelligence, the company selling the system, claims that the detector has helped to reduce aggressive incidents. What are the ethical implications of such systems?

ProPublica recently tested one such system, enrolling some students to produce a range of sounds that might or might not trigger the alarm. They also talked to some of the organizations using it, including a hospital in New Jersey that has now decommissioned the system, following a trial that (among other things) failed to detect a seriously agitated patient. ProPublica's conclusion was that the system was "less than reliable".

Sound Intelligence is a Dutch company, which has been fitting microphones into street cameras for over ten years, in the Netherlands and elsewhere in Europe. This was approved by the Dutch Data Protection Regulator on the argument that the cameras are only switched on after someone screams, so the privacy risk is reduced.

But Dutch cities can be pretty quiet. As one of the developers admitted to the New Yorker in 2008, "We don’t have enough aggression to train the system properly". Many experts have questioned the validity of installing the system in an entirely different environment, and Sound Intelligence refused to reveal the source of the training data, including whether the data had been collected in schools.

In theory, a genuine scream can be identified by a sound pattern that indicates a partial loss of control of the vocal cords, although the accurate detection of this difference can be compromised by audio distortion (known as clipping). When people scream on demand, they protect their vocal cords and do not produce the same sound. (Actors are taught to simulate screams, but the technology can supposedly tell the difference.) So it probably matters whether the system is trained and tested using real screams or fake ones. (Of course, one might have difficulty persuading an ethics committee to approve the systematic production and collection of real screams.)

Can any harm be caused by such technologies? Apart from the fact that schools may be wasting money on stuff that doesn't actually work, there is a fairly diffuse harm of unnecessary surveillance. Students may learn to suppress all varieties of loud noises, including sounds of celebration and joy. There may also be opportunities for the technologies to be used as a tool for harming someone - for example, by playing a doctored version of a student's voice in order to get that student into trouble, or, if the security guard is a bit trigger-happy, killed.

Technologies like this can often be gamed. For example, a student or ex-student planning an act of violence would be aware of the system and would have had ample opportunity to test what sounds it did or didn't respond to.

Obviously no technology is completely risk-free. If a technology provides genuine benefits in terms of protecting people from real threats, then this may outweigh any negative side-effects. But if the benefits are unproven or imaginary, as ProPublica suggests, this is a more difficult equation.

ProPublica quoted a school principal from a quiet leafy suburb, who justified the system as providing "a bit of extra peace of mind". This could be interpreted as a desire to reassure parents with a false sense of security. Which might be justifiable if it allowed children and teachers to concentrate on schoolwork rather than worrying unnecessarily about unlikely scenarios, or pushing for more extreme measures such as arming the teachers. (But there is always an ethical question mark over security theatre of this kind.)

But let's go back to the nightmare scenario that the system is supposed to protect against. If a school or hospital equipped with this system were to experience a mass shooting incident, and the system failed to detect the incident quickly enough (which on the ProPublica evidence seems quite likely), the incident investigators might want to look at sound recordings from the system. Fortunately, these microphones "allow administrators to record, replay and store those snippets of conversation indefinitely". So that's alright then.

In addition to publishing its findings, ProPublica also published the methodology used for testing and analysis. The first point to note is that this was done with the active collaboration of the supplier. It seems they were provided with good technical information, including the internal architecture of the device and the exact specification of the microphone used. They were able to obtain an exactly equivalent microphone, and could rewire the device and intercept the signals. They discarded samples that had been subject to clipping.
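
As a purely generic illustration of that last step (this is not ProPublica's actual code, and the threshold and run length are assumptions), clipped samples can be detected by looking for runs of consecutive values pinned at full scale:

    import numpy as np

    def is_clipped(samples: np.ndarray, full_scale: float = 1.0,
                   threshold: float = 0.999, max_run: int = 3) -> bool:
        """Flag a buffer as clipped if it contains a run of consecutive
        samples sitting at (or very near) full scale."""
        at_limit = np.abs(samples) >= threshold * full_scale
        run = longest_run = 0
        for flag in at_limit:
            run = run + 1 if flag else 0
            longest_run = max(longest_run, run)
        return longest_run >= max_run

    # Fabricated example: a sine wave amplified beyond full scale.
    t = np.linspace(0, 1, 8000)
    clean = 0.8 * np.sin(2 * np.pi * 440 * t)
    distorted = np.clip(2.5 * clean, -1.0, 1.0)

    print(is_clipped(clean))      # False
    print(is_clipped(distorted))  # True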

The effectiveness of any independent testing and evaluation is clearly affected by the degree of transparency of the solution, and the degree of cooperation and support provided by the supplier and the users. So this case study has implications, not only for the testing of devices, but also for transparency and system access.




Jack Gillum and Jeff Kao, Aggression Detectors: The Unproven, Invasive Surveillance Technology Schools Are Using to Monitor Students (ProPublica, 25 June 2019)

Jeff Kao and Jack Gillum, Methodology: How We Tested an Aggression Detection Algorithm (ProPublica, 25 June 2019)

John Seabrook, Hello, Hal (New Yorker, 16 June 2008)

P.W.J. van Hengel and T.C. Andringa, Verbal aggression detection in complex social environments (IEEE Conference on Advanced Video and Signal Based Surveillance, 2007)

Groningen makes "listening cameras" permanent (Statewatch, Vol 16 No 5/6, August-December 2006)

Wikipedia: Clipping (Audio)

Related posts: Affective Computing (March 2019), False Sense of Security (June 2019)


Updated 28 June 2019. Thanks to Peter Sandman for pointing out a lack of clarity in the previous version.

Saturday, May 11, 2019

Whom does the technology serve?

When regular hyperbole isn't sufficient, writers often refer to new technologies as The Holy Grail of something or other. As I pointed out in my post on Chatbot Ethics, this has some important ethical implications.

Because in the mediaeval Parsifal legend, at a key moment in the story, our hero fails to ask the critical question: Whom Does the Grail Serve? And when technologists enthuse about the latest inventions, they typically overlook the same question: Whom Does the Technology Serve?

In a new article on driverless cars, Dr Ashley Nunes of MIT argues that academics have allowed themselves to be distracted by versions of the Trolley Problem (Whom Shall the Vehicle Kill?), and have neglected some much more important ethical questions.

For one thing, Nunes argues that the so-called autonomous vehicles are never going to be fully autonomous. There will always be ways of controlling cars remotely, so the idea of a lone robot struggling with some ethical dilemma is just philosophical science fiction. Last year, he told Jesse Dunietz that he hasn't yet found a safety-critical transport system without real-time human oversight.

And in any case, road safety is never about one car at a time; it is about deconfliction - which means cars avoiding each other as well as pedestrians. With human driving, there are multiple deconfliction mechanisms that allow many vehicles to occupy the same space without hitting each other. These include traffic signals, road markings and other conventions indicating right of way, as well as signals (including honking and flashing lights) used to negotiate between drivers, or by drivers to show that they are willing to wait for a pedestrian to cross the road in front of them. Equivalent mechanisms will be required to enable so-called autonomous vehicles to provide a degree of transparency of intention, and therefore trust. (See Matthews et al; see also Applin and Fischer.) See my post on the Ethics of Interoperability.


But according to Nunes, "the most important question that we should be asking about this technology" is "Who stands to gain from its life-saving potential?" Because "if those who most need it don’t have access, whose lives would we actually be saving?"

In other words, Whom Does The Grail Serve?




Sally Applin and Michael Fischer, Applied Agency: Resolving Multiplexed Communication in Automobiles (Adjunct Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '12), October 17–19, 2012, Portsmouth, NH, USA) HT @AnthroPunk

Rachel Coldicutt, Tech ethics, who are they good for? (8 June 2018)

Jesse Dunietz, Despite Advances in Self-Driving Technology, Full Automation Remains Elusive (Undark, 22 November 2018) HT @SafeSelfDrive

Ashley Nunes, Driverless cars: researchers have made a wrong turn (Nature Briefing, 8 May 2019) HT @vdignum @HumanDriving

Milecia Matthews, Girish Chowdhary and Emily Kieson, Intent Communication between Autonomous Vehicles and Pedestrians (2017) 

Eric A Taub, How Jaywalking Could Jam Up the Era of Self-Driving Cars (New York Times, 1 August 2019)

Wikipedia: Trolley Problem


Related posts

For Whom (November 2006), Defeating the Device Paradigm (October 2015), Towards Chatbot Ethics - Whom Does the Chatbot Serve?  (May 2019), Ethics of Interoperability (May 2019), The Road Less Travelled - Whom Does the Algorithm Serve? (June 2019), Jaywalking (November 2019)

Sunday, May 05, 2019

Towards Chatbot Ethics

When over-enthusiastic articles describe chatbotics as the Holy Grail (for digital marketing or online retail or whatever), I would normally ignore this as the usual hyperbole. But in this case, I'm going to take it literally. Let me explain.

As followers of the Parsifal legend will know, at a critical point in the story Parsifal fails to ask the one question that matters: "Whom does the Grail serve?"

And anyone who wishes to hype chatbots as some kind of "holy grail" must also ask the same question: "Whom does the Chatbot serve?" IBM puts this at the top of its list of ethical questions for chatbots, as does @ashevat (formerly with Slack).

To the extent that a chatbot is providing information and advice, it is subject to many of the same ethical considerations as any other information source - is the information complete, truthful and unbiased, or does it serve the information provider's commercial interest? Perhaps the chatbot (or rather its owner) is getting a commission if you eat at the recommended restaurant, just as hotel concierges have always done. A restaurant review in an online or traditional newspaper may appear to be independent, but restaurants have many ways of rewarding favourable reviews even without cash changing hands. You might think that ethics at least requires such arrangements to be transparent.

But an important difference between a chatbot and a newspaper article is that the chatbot has a greater ability to respond to the particular concerns and vulnerabilities of the user. Shiva Bhaskar discusses how this power can be used for manipulation and even intimidation. And making sure the user knows that they are talking to a bot rather than a human does not guard against an emotional reaction: Joseph Weizenbaum was one of the first in the modern era to recognize this.

One area where particularly careful ethical scrutiny is required is the use of chatbots for mental health support. Obviously there are concerns about safety and privacy as well as efficacy, and such systems need to undergo clinical trials for efficacy and potential adverse outcomes, just like any other medical intervention. Kira Kretzschmar et al argue that it is also essential that these platforms are specifically programmed to discourage over-reliance, and that users are encouraged to seek human support in an emergency.
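
As a purely illustrative sketch of what "specifically programmed to discourage over-reliance" might mean in practice (this is not based on any of the platforms studied by Kretzschmar et al, and the keywords, limits and messages are invented), a safety layer could sit between the user and the reply generator:

    # Hypothetical safety wrapper for a mental health chatbot.
    # Keywords, thresholds and messages are illustrative assumptions only.
    CRISIS_KEYWORDS = {"suicide", "kill myself", "self-harm", "overdose"}
    SESSION_LIMIT = 20   # nudge towards human support after this many exchanges

    HUMAN_SUPPORT_MESSAGE = (
        "It sounds like you may need more help than I can give. "
        "Please contact a crisis line or someone you trust - shall I list some contacts?"
    )
    OVER_RELIANCE_MESSAGE = (
        "We've been talking a lot today. I'm only a program - "
        "it might also help to talk this over with someone you trust."
    )

    def respond(user_message: str, exchange_count: int, generate_reply) -> str:
        """Route the message through safety checks before the normal reply."""
        lowered = user_message.lower()
        if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
            return HUMAN_SUPPORT_MESSAGE       # escalate rather than improvise
        if exchange_count >= SESSION_LIMIT:
            return OVER_RELIANCE_MESSAGE       # discourage over-reliance
        return generate_reply(user_message)    # normal chatbot behaviour

    # Example with a stubbed reply generator.
    print(respond("I can't sleep", 3, lambda m: "Tell me more about that."))
    print(respond("I keep thinking about suicide", 4, lambda m: "..."))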


Another ethical problem with chatbots is related to the Weasley doctrine (named after Arthur Weasley in Harry Potter and the Chamber of Secrets):
"Never trust anything that can think for itself if you can't see where it keeps its brain."
Many people have installed these curious cylindrical devices in their homes, but is that where the intelligence is actually located? When a private conversation was accidentally transmitted from Portland to Seattle, engineers at Amazon were able to inspect the logs, coming up with a somewhat implausible explanation as to how this might have occurred. Obviously this implies a lack of boundaries between the device and the manufacturer. And as @geoffreyfowler reports, chatbots don't only send recordings of your voice back to Master Control, they also send status reports from all your other connected devices.

Smart home, huh? Smart for whom? Transparency for whom? Or to put it another way, whom does the chatbot serve?





Shiva Bhaskar, The Chatbots That Will Manipulate Us (30 June 2017)

Geoffrey A. Fowler, Alexa has been eavesdropping on you this whole time (Washington Post, 6 May 2019) HT@hypervisible

Sidney Fussell, Behind Every Robot Is a Human (The Atlantic, 15 April 2019)

Tim Harford, Can a computer fool you into thinking it is human? (BBC 25 September 2019)

Gary Horcher, Woman says her Amazon device recorded private conversation, sent it out to random contact (25 May 2018)

Kira Kretzschmar et al, Can Your Phone Be Your Therapist? Young People’s Ethical Perspectives on the Use of Fully Automated Conversational Agents (Chatbots) in Mental Health Support (Biomed Inform Insights, 11, 5 March 2019)

Trips Reddy, The code of ethics for AI and chatbots that every brand should follow (IBM, 15 October 2017)

Amir Shevat, Hard questions about bot ethics (Slack Platform Blog, 12 October 2016)

Tom Warren, Amazon explains how Alexa recorded a private conversation and sent it to another user (The Verge, 24 May 2018)

Joseph Weizenbaum, Computer Power and Human Reason (WH Freeman, 1976)


Related posts: Understanding the Value Chain of the Internet of Things (June 2015), Whom does the technology serve? (May 2019), The Road Less Travelled (June 2019), The Allure of the Smart Home (December 2019), The Sad Reality of Chatbotics (December 2021)

Updated 4 October 2019

Thursday, March 07, 2019

Affective Computing

At #NYTnewwork in February 2019, Rana el-Kaliouby asked: what if doctors could objectively measure your mental state? Dr el-Kaliouby is one of the pioneers of affective computing, and is the founder of a company called Affectiva. Some of her early work was building apps that helped autistic people to read expressions. She now argues that artificial emotional intelligence is key to building reciprocal trust between humans and AI.

Affectiva competes with some of the big tech companies (including Amazon, IBM and Microsoft), which now offer emotional analysis or sentiment analysis alongside facial recognition.

One proposed use of this technology is in the classroom. The idea is to install a webcam in the classroom: the system watches the students, monitors their emotional state, and gives feedback to the teacher in order to maximize student engagement. (For example, Mark Lieberman reports a university trial in Minnesota, based on the Microsoft system. Lieberman includes some sceptical voices in his report, and the trial is discussed further in the 2018 AI Now report.)

So how do such systems work? The computer is trained to recognize a happy face by being shown large numbers of images of happy faces. This depends on a team of human coders labelling the images.

And this coding generally relies on a classical theory of emotions. Much of this work is credited to a research psychologist called Paul Ekman, who developed a Facial Action Coding System (FACS). Most of these programs use a version called EMFACS, which detects six or seven supposedly universal emotions: anger, contempt, disgust, fear, happiness, sadness and surprise. The idea is that because these emotions are hardwired, they can be detected by observing facial muscle movements.
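
To make the training step concrete, here is a minimal sketch of the supervised-learning pipeline described above, using scikit-learn and randomly generated data standing in for human-labelled face images. It is a toy: commercial systems use far larger datasets and deep networks rather than a linear classifier, but the structure - images in, human-assigned emotion labels in, a model that reproduces those labels out - is the same.

    # Toy illustration of training an emotion classifier from labelled faces.
    # The data here is random noise standing in for real, human-coded images.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    EMOTIONS = ["anger", "contempt", "disgust", "fear", "happiness", "sadness", "surprise"]

    rng = np.random.default_rng(0)
    n_images, n_pixels = 700, 48 * 48              # e.g. 48x48 grayscale crops
    X = rng.random((n_images, n_pixels))           # stand-in for flattened face images
    y = rng.integers(0, len(EMOTIONS), n_images)   # stand-in for human coders' labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000)      # real products use deep networks
    model.fit(X_train, y_train)

    # Everything the model "knows" about emotion comes from the labels,
    # so any assumption baked into the coding scheme is baked into the model.
    print("Predicted emotion:", EMOTIONS[model.predict(X_test[:1])[0]])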

Lisa Feldman Barrett, one of the leading critics of the classical theory, argues that emotions are more complicated, and are a product of one's upbringing and environment. Emotions are real, but not in the objective sense that molecules or neurons are real. They are real in the same sense that money is real – that is, hardly an illusion, but a product of human agreement.

It has also been observed that people from different parts of the world, or from different ethnic groups, express emotions differently. (Who knew?) Algorithms that fail to deal with ethnic diversity may be grossly inaccurate and set people up for racial discrimination. For example, in a recent study of two facial recognition software products, one product consistently interpreted black sportsmen as angrier than white sportsmen, while the other labelled the black subjects as contemptuous.
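
One simple way to check for this kind of disparity (this is not the method of the Rhue study itself; the scores and threshold below are fabricated for illustration) is to compare the average emotion scores a product assigns to matched photographs from different groups:

    # Hypothetical audit: compare anger scores assigned to two groups of faces.
    # Scores are fabricated; in practice they would come from the vendor's API.
    from statistics import mean

    scored_faces = [
        {"group": "A", "anger": 0.12}, {"group": "A", "anger": 0.18},
        {"group": "A", "anger": 0.15}, {"group": "B", "anger": 0.31},
        {"group": "B", "anger": 0.27}, {"group": "B", "anger": 0.35},
    ]

    def mean_score(group: str, emotion: str) -> float:
        return mean(face[emotion] for face in scored_faces if face["group"] == group)

    gap = mean_score("B", "anger") - mean_score("A", "anger")
    print(f"Mean anger score gap (B - A): {gap:.2f}")
    if abs(gap) > 0.1:   # arbitrary illustrative threshold
        print("Potential disparity worth investigating")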

But Affectiva prides itself on dealing with ethnic diversity. When Rana el-Kaliouby spoke to Oscar Schwartz recently, while acknowledging that the technology is not foolproof, she insisted on the importance of collecting diverse data sets in order to compile ethnically based benchmarks ... codified assumptions about how an emotion is expressed within different ethnic cultures. In her most recent video, she also stressed the importance of diversity in the team building these systems.

Shoshana Zuboff describes sentiment analysis as yet another example of the behavioural surplus that helps Big Tech accumulate what she calls surveillance capital.
Your unconscious - where feelings form before there are words to express them - must be recast as simply one more source of raw-material supply for machine rendition and analysis, all of it for the sake of more-perfect prediction. ... This complex of machine intelligence is trained to isolate, capture, and render the most subtle and intimate behaviors, from an inadvertent blink to a jaw that slackens in surprise for a fraction of a second.
Zuboff 2019, pp 282-3
Zuboff relies heavily on a long interview with el-Kaliouby in the New Yorker in 2015, where she expressed optimism about the potential of this technology, not only to read emotions but to affect them.
I do believe that if we have information about your emotional experiences we can help you be in a more positive mood and influence your wellness.
In her talk last month, without explicitly mentioning Zuboff's book, el-Kaliouby put a strong emphasis on the ethical values of Affectiva, explaining that the company has turned down offers of funding from the security, surveillance and lie detection sectors, in order to concentrate on such areas as safety and mental health. I wonder whether IBM ("Principles for the Cognitive Era") and Microsoft ("The Future Computed: Artificial Intelligence and its Role in Society") will take the same position.

HT @scarschwartz @raffiwriter



AI Now Report 2018 (AI Now Institute, December 2018)

Bernd Bösel and Serjoscha Wiemer (eds), Affective Transformations: Politics—Algorithms—Media (Meson Press, 2020)

Hannah Devlin, AI systems claiming to 'read' emotions pose discrimination risks (Guardian, 16 February 2020)

Rana el-Kaliouby, Teaching Machines to Feel (Bloomberg via YouTube, 20 Sep 2017), Emotional Intelligence (New York Times via YouTube, 6 Mar 2019)

Lisa Feldman Barrett, Psychological Construction: The Darwinian Approach to the Science of Emotion (Emotion Review, Vol 5, No 4, October 2013) pp 379-389

Douglas Heaven, Why faces don't always tell the truth about feelings (Nature, 26 February 2020)

Raffi Khatchadourian, We Know How You Feel (New Yorker, 19 January 2015)

Mark Lieberman, Sentiment Analysis Allows Instructors to Shape Course Content around Students’ Emotions (Inside Higher Education, 20 February 2018)

Lauren Rhue, Racial Influence on Automated Perceptions of Emotions (November 9, 2018) http://dx.doi.org/10.2139/ssrn.3281765

Oscar Schwartz, Don’t look now: why you should be worried about machines reading your emotions (The Guardian, 6 Mar 2019)

Shoshana Zuboff, The Age of Surveillance Capitalism (UK Edition: Profile Books, 2019)

Wikipedia: Facial Action Coding System

Related posts: Linking Facial Expressions (September 2009), Data and Intelligence Principles from Major Players (June 2018), Shoshana Zuboff on Surveillance Capitalism (February 2019), Listening for Trouble (June 2019)


Links added February 2020

Wednesday, June 08, 2011

Ethics of Risk in Public Sector IT

@tonyrcollins via @glynmoody and @Mark_Antony asks Should winning bidders tell if they suspect a new contract is undeliverable? (8 June 2011) and raises some excellent ethical points about public sector procurement.

One of the functions of good journalism is to hold people and organizations to account. Tony fishes out a speech given in 2004 by Sir Christopher Bland, then chairman of BT, in which he acknowledged incomplete success in previous ventures, and admitted the extraordinary challenges involved in the NPfIT, for which BT had just won three contracts then valued at over £2bn.

There is obviously a difference between something's being extremely difficult and its being impossible. BT executives can fairly claim that they were always open about the fact that it was going to be difficult, and that they didn't know for sure that it was going to be impossible. But at the same time, there is an asymmetry of information here - the supplier is presumably in a better position to assess certain classes of risk than the customer. (Meanwhile, there may be other classes of risk that the customer should know more about than the supplier.)

In my opinion, the ethical issues here are not to do with deliberate concealment of known facts, but with misleading or inadequate assessment of shared risk. The key word in Tony's headline is "suspect". So what are the ethics of doubt?