Sunday, December 29, 2019

The Allure of the Smart Home

What exactly is a smart home, and why would I want to live in one?

I don't think the smart home concept is just about having the latest cool technology or containing some smart stuff. And many of the most commonly discussed examples of smart technology in the home seem to be merely modest improvements on earlier technologies, rather than something entirely new.

Let's look at some smart devices you might have in your home. Programmable thermostats have been available for ages, adjusting heating and/or air conditioning to maintain a comfortable temperature at certain times of day. Modern heating systems can now offer separate controls for each room, and be programmed to reduce your total energy consumption: such systems are typically marketed as intelligent systems. So whatever smart technology is doing in this area looks more like useful improvement than radical change.

Or how about remote control? Remote control devices have been around for a long time, especially for couch potatoes who wished to change TV channels without the effort of walking a few feet across the room. Now we have voice-activated controls, for people who can't even be bothered to search under the cushions for the remote. Voice activation may be a bit more technologically sophisticated than pushing buttons, and some artificial intelligence may be required to recognize and interpret the voice commands, but it's basically the same need that is being satisfied here.

Or how about a chatbot to answer your questions? In most cases, the answers aren't hard-wired into the device, but are pulled from some source outside the home. So the chatbot is merely a communication device, as if you had a telephone hotline to Stephen Fry, only faster and always available - like several million Stephen Fry clones working in parallel around the clock. (You may choose any other knowledgeable and witty celebrity if you prefer.)

And the idea that having a chatbot device in your home makes the home itself smart is like thinking that having a smartphone in the pocket of your trousers turns them into smart trousers. Or that having Stephen Fry's phone number attached to your fridge door turns it into a smart fridge.

Of course, a smart system may have multiple components - different classes of device. You might install an intelligent security system, using cameras and other devices, to recognize and admit your children and pets, while keeping the home safe from unwanted visitors.

But surely the concept of a smart home means more than just having a number of smart parts or subsystems; it implies that the home itself manifests some intelligence at the whole-system level. The primary requirement seems to be that these smart devices are connected, not to the world outside the home, but to each other, enabling them to orchestrate their activity. Not just home automation, but seamless home automation.

For example, suppose I make my home responsible for getting me to work on time. My home computer could monitor traffic reports and public transport disruptions, check with my car whether I need to allow extra time to refuel, send a message to my alarm clock to wake me at the optimal time, and instruct the heating system when to switch the boiler on.
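To make this concrete, here is a minimal sketch of that morning routine as message-passing between devices on a home network. The device names, topics and timings are invented for illustration - this is not any particular product's API - and note that nothing in the sketch needs to leave the house.

```python
from datetime import datetime, timedelta

class HomeBus:
    """A trivial publish/subscribe hub that could run entirely inside the house."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers.get(topic, []):
            handler(message)

def plan_morning(bus, arrive_by, base_journey, traffic_delay, needs_fuel):
    """Work backwards from the required arrival time and tell the other devices."""
    journey = base_journey + traffic_delay
    if needs_fuel:
        journey += timedelta(minutes=10)                     # detour to refuel
    wake_up = arrive_by - journey - timedelta(minutes=45)    # time to get ready
    bus.publish("alarm/set", {"time": wake_up})
    bus.publish("heating/boiler_on", {"time": wake_up - timedelta(minutes=30)})

bus = HomeBus()
bus.subscribe("alarm/set", lambda m: print("Alarm set for", m["time"]))
bus.subscribe("heating/boiler_on", lambda m: print("Boiler on at", m["time"]))

plan_morning(
    bus,
    arrive_by=datetime(2019, 12, 30, 9, 0),
    base_journey=timedelta(minutes=40),
    traffic_delay=timedelta(minutes=15),   # from a traffic report
    needs_fuel=True,                       # reported by the car
)
```

The in-process bus stands in for whatever hub or protocol the devices actually share; the point is that the coordination logic can live locally.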

Assuming I do not wish my movements to be known in advance by burglars and kidnappers, all of these messages need to be secure against eavesdropping. It isn't obvious to me why it would be necessary to transmit these messages via servers outside my house. Yes, I know it's called the internet of things, but does that mean everything has to go via the internet?

Well, yes, apparently it does, if we follow the recently announced Connected Home over IP (CHIP) standards, to be developed jointly by Amazon, Apple, Google, and most of the other key players in the smart home market.

Many of those who commented on the Register article raised concerns about encryption. It seems unlikely at this point that the tech giants will be keen on end-to-end encryption, because surely they are going to want to feed your data into the hungry mouths of their machine learning starlings. So whatever security measures are included in the CHIP standards, they will probably represent a compromise, appearing to take security seriously while not seriously impeding the commercial and strategic interests of the vendors. Smart for them, not necessarily for us.

Sometimes it seems that the people who benefit most from the smart home are not those actually living in these homes but service providers, using your data to keep an eye on you. For example, landlords:
Smart home technology is an alluring proposition for the apartment industry: Provide renters with a home that integrates with and responds to their lifestyle, and increase rents, save on energy, and collect useful resident population data in return. Kayla Devon
Internet-connected locks and facial recognition systems have raised privacy concerns among tenants across the country. A sales pitch directed at landlords by a smart-home security company indicates that the technology could help them raise rental prices and potentially get people evicted. Alfred Ng
We should pass a law that would hold smart access companies to the highest possible standard while making certain that their technology is safe, secure and reliable for tenants. Michael McKee

Energy companies have been pushing smart meters and other smart technologies, supposedly to help you reduce your energy bills, but also to get involved in other aspects of your life. For example, Constellation promotes the benefits of smart home technology for maintaining the independence of the elderly, while Karen Jordan mentions the possibility of remote surveillance by family members living elsewhere.
Smart technology that recognizes patterns, such as the morning coffee-making routine mentioned earlier, could come in handy when those patterns are broken, perhaps alerting grown children that something may be amiss with an elderly parent. Karen Jordan

As Ashlee Clark Thompson points out, this kind of remote surveillance can benefit the children as well as the parents, providing peace of mind as well as reducing the need for physical visits to check up. 

And doubtless the energy companies have other ideas as well. According to Ross Clark:

Scottish and Southern Electricity Networks has proposed a system in which it will be able to turn off certain devices in our homes ... when the supply of electricity is too small to meet demand.

So we keep coming back to the fundamental ethical question: Whom shall the smart home serve?




Dieter Bohn, Situation: there are too many competing smart home standards. Surely a new one will fix it, right? (The Verge, 19 Dec 2019)

Ross Clark, The critics of smart meters were right all along (Telegraph, 19 September 2020) HT @tprstly

Constellation, Smart Homes Allow the Aging to Maintain Independence (published 20 July 2018, updated 13 August 2018)

Kayla Devon, The Lure of the Smart Apartment (MFE, 31 March 2016)

Karen Jordan, Set It And Forget It: The Lure Of Smart Apartments (Forbes, 28 August 2017)

Kieren McCarthy, The IoT wars are over, maybe? Amazon, Apple, Google give up on smart-home domination dreams, agree to develop common standards (The Register, 18 Dec 2019)

Michael McKee, Your Landlord Could Know That You’re Not at Home Right Now (New York Times, 17 December 2019)

Alfred Ng, Smart home tech can help evict renters, surveillance company tells landlords (CNET, 25 October 2019)

Ashlee Clark Thompson, Persuading your older parents to take the smart home leap (CNET, 11 April 2017)

Related posts: Understanding the Value Chain of the Internet of Things (June 2015), Defeating the Device Paradigm (Oct 2015), Hidden Functionality (February 2019), Towards Chatbot Ethics - Whom does the chatbot serve? (May 2019), Driverless cars - Whom does the technology serve? (May 2019), The Road Less Travelled - Whom does the algorithm serve? (June 2019)

 

 Updated 19 September 2020

Friday, October 11, 2019

Insights and Challenges from Mulesoft Connect 2019

#CONNECT19 @MuleSoft is a significant player in the Integration Platform as a Service (iPaaS) market. I've just spent some time at their annual customer event in London.

Over the years, I've been to many events like this. A technology company organizes an event, possibly repeated at several locations around the globe, attended by customers and prospects, business partners, employees and others. After an introduction of loud music and lights shining in your face, the CEO or CTO or honoured guest bounces onto the stage and provides a keynote speech. Outside, there will be exhibition stands with product demonstrations, as well as information about complementary products and services.

At such events, we are presented with an array of messages from the company and its business partners, with endorsements and some useful insights from a handful of customers. So how to analyse and evaluate these messages?

Firstly, what's new. In the case of Mulesoft, the core technology vision of microservices, networked applications and B2B ecosystems has been around for many years. (At the CBDI Forum, we were talking about some of this stuff over ten years ago.) But it's useful to see how far the industry has got towards this vision, and how much further there is to go. In his presentation, Mulesoft CTO Uri Sarid described a complex ecosystem that might exist by around 2025, including demand-side orchestration of services. There is a fair amount of technology for supply-side orchestration of APIs, but demand-side orchestration isn't really there yet.

Furthermore, organizations are often cautious about releasing their APIs into the wild. For example, government departments may make APIs available to other government departments, local governments and the NHS, but in many cases it is not yet possible for citizens to consume these APIs directly, or for a third party (such as a charity) to act as a mediator. However, some sectors have made progress in this direction, thanks to initiatives such as Open Banking.

That said, the technology has allowed all sorts of things to be done much faster and more reliably, and I heard some good stories about the speed with which new functionality can be rolled out across multiple endpoints. As some of the technical obstacles are removed, IT people should be able to shift their attention to business transformation, or even imagination and innovation.

MuleSoft argues that the API economy will drive / is driving a rapid cycle of (incremental) innovation, accelerating the pace of change in some ecosystems. MuleSoft is enthusiastic about citizen integration or democratization, shifting the initiative from the Town Planners to the Settlers and Pioneers. However, if APIs are to serve as reusable building blocks, they need to be built to last. (There is an important connection between ideas of trimodal development and ideas of pace layering, which needs to be teased out further.)


Secondly, what's different. Not just differences between the past and the present, but differences between Mulesoft and other vendors with comparable offerings. At a given point in time, each competing product will provide slightly different sets of features at different price points, but feature comparisons can get outdated very quickly. And if you are acquiring this kind of technology, you would ideally like to know the total cost of ownership, and the productivity you are likely to get. It takes time and money to research such questions properly, and I'm not surprised that so many people rely on versions of the Magic Sorting Hat produced by the large analyst firms.

By the way, this is not just about comparing Mulesoft with other iPaaS vendors, but comparing iPaaS with other technologies, such as Robotic Process Automation (RPA).


And thirdly, what's missing. Although I heard business strategy for APIs mentioned several times, I didn't hear much about how this could be done. Several speakers warned against using the term API for a non-technical audience, and advised people to talk about service benefit.

But how to identify and analyse service benefit? How do you identify the service value that can be delivered through APIs, how do you determine the right level of granularity and asset-specificity, and what are the design principles? In other words, how do you get the business requirements that drive the use of MuleSoft or other iPaaS products? I buttonholed a few consultants from the large consultancies, but the answers were mostly disappointing.


I plan to attend some similar events this month, and shall write a couple of general follow-up posts.




Notes

Here's an article I wrote in 2002, which mentioned Sun Microsystems' distinction between micro services and macro services.

Richard Veryard, Identifying Web Services (CBDI Journal, February 2002)

And here are two articles discussing demand-side orchestration.

Richard Veryard and Philip Boxer, Metropolis and SOA Governance (Microsoft Architecture Journal, July 2005)

Philip Boxer and Richard Veryard, Taking Governance to the Edge (Microsoft Architecture Journal, August 2006)

See also CBDI Journal Archive

Link to MuleSoft presentations https://library.mulesoft.com/mulesoft-connect-london-2019/


Related post: Strategy and Requirements for the API Ecosystem (October 2019)

Friday, August 09, 2019

RPA - Real Value or Painful Experimentation?

In May 2017, Fran Karamouzis of Gartner stated that "96% of clients are getting real value from RPA" (Robotic Process Automation). But by October/November 2018, RPA was declared to be at the top of the Gartner "hype cycle", also known as the Peak of Inflated Expectations.

So from a peak of inflated expectations we should not be surprised to see RPA now entering a trough of disillusionment, with surveys showing significant levels of user dissatisfaction. Phil Fersht of HfS explains this in terms that will largely be familiar from previous technological innovations.
  • The over-hyping of how "easy" this is
  • Lack of real experiences being shared publicly
  • Huge translation issues between business and IT
  • Obsession with "numbers of bots deployed" versus quality of outcomes
  • Failure of the "Big iron" ERP vendors and the digital juggernauts to embrace RPA 
"You can't focus on a tools-first approach to anything." adds @jpmorgenthal

There are some generic models and patterns of technology adoption and diffusion that are largely independent of the specific technology in question. When Everett Rogers and his colleagues did the original research on the adoption of new technology by farmers in the 1950s, it made sense to identify a spectrum of attitudes, with "innovators" and "early adopters" at one end, and with "late adopters" or "laggards" at the other end. Clearly some people can be attracted by a plausible story of future potential, while others need to see convincing evidence that an innovation has already succeeded elsewhere.
Diffusion of Innovations (Source: Wikipedia)

Obviously adoption by organizations is a slightly more complicated matter than adoption by individual farmers, but we can find a similar spread of attitudes within a single large organization. There may be some limited funding to carry out early trials of selected technologies (what Fersht describes as "sometimes painful experimentation"), but in the absence of positive results it gets progressively harder to justify continued funding. Opposition from elsewhere in the organization comes not only from people who are generally sceptical about technology adoption, but also from people who wish to direct the available resources towards some even newer and sexier technology. The "pioneers" have moved on to something else, and the "settlers" aren't yet ready to settle. There is a discontinuity in the adoption curve, which Geoffrey Moore calls "crossing the chasm".

Note: The terms "pioneers" and "settlers" refer to the trimodal approach. See my post Beyond Bimodal (May 2016).

But as Fersht indicates, there are some specific challenges for RPA in particular. Although it's supposed to be about process automation, some of the use cases I've seen are simply doing localized application patching, using robots to perform ad hoc swivel-chair integration. Not even paving the cow-paths, but paving the workarounds. Tool vendors such as KOFAX recommend specific types of robot for different patching requirements. The problem with this patchwork approach to automation is that while each patch may make sense in isolation, the overall architecture progressively becomes more complicated.

There is a common view of process optimization that suggests you concentrate on fixing the bottlenecks, as if the rest of the process can look after itself, and this view has been adopted by many people in the RPA world. For example, Ayshwarya Venkataraman, who describes herself on LinkedIn as a technology evangelist, asserts that "process optimization can be easily achieved by automating some tasks in a process".

But fixing a bottleneck in one place often exposes a bottleneck somewhere else. Moreover, complicated workflow solutions may be subject to Braess's paradox, which says that under certain circumstances adding capacity to a network can actually slow it down. So you really need to understand the whole end-to-end process (or system-of-systems).
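For readers who haven't met it, here is the standard textbook illustration of Braess's paradox, worked through in a few lines of Python. The network and the numbers are the classic example from the literature, not data about any real road system or workflow.

```python
# Classic Braess network: 4000 drivers travel from Start to End.
# Route 1: Start -> A takes t/100 minutes for t drivers, then A -> End takes a fixed 45 minutes.
# Route 2: Start -> B takes a fixed 45 minutes, then B -> End takes t/100 minutes for t drivers.

DRIVERS = 4000

def route_time(drivers_on_variable_link, fixed_minutes=45):
    return drivers_on_variable_link / 100 + fixed_minutes

# Without the extra link, drivers split evenly and both routes take the same time.
split = DRIVERS // 2
print("Before new link:", route_time(split), "minutes")   # 2000/100 + 45 = 65

# Add a zero-cost shortcut from A to B. The variable links now cost at most
# 4000/100 = 40 minutes, which always beats the fixed 45-minute links, so at
# equilibrium every driver takes Start -> A -> B -> End and nobody gains by switching back.
print("After new link: ", DRIVERS / 100 + 0 + DRIVERS / 100, "minutes")   # 40 + 0 + 40 = 80
```

Adding capacity has made every journey fifteen minutes longer - which is exactly why a local fix needs to be evaluated against the end-to-end process.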

And there's an ethical point here as well. Human-computer processes need to be designed not only for efficiency and reliability but also for job satisfaction. The robots should be configured to serve the people, not just taking over the easily-automated tasks and leaving the human with a fragmented and incoherent job serving the robots.

And the more bots you've got (and the more bot licences you've bought), the more the challenge shifts from getting each bot to work properly to combining large numbers of bots in a meaningful and coordinated way. Adding a single robotic patch to an existing process may deliver short-term benefits, but how are users supposed to mobilize and combine hundreds of bots in a coherent and flexible manner, to deliver real lasting enterprise-scale value? Ravi Ramamurthy believes that a rich ecosystem of interoperable robots will enable a proliferation of automation - but we aren't quite there yet.



Phil Fersht, Gartner: 96% of customers are getting real value from RPA? Really? (HfS 23 May 2017), With 44% dissatisfaction, it's time to get real about the struggles of RPA 1.0 (HfS, 31 July 2019)

Geoffrey Moore, Crossing the Chasm (1991)

Susan Moore, Gartner Says Worldwide Robotic Process Automation Software Market Grew 63% in 2018 (Gartner, 24 June 2019)

Ravi Ramamurthy, Is Robotic Automation just a patchwork? (6 December 2015)

Everett Rogers, Diffusion of Innovations (First published 1962, 5th edition 2003)

Daniel Schmidt, 4 Indispensable Types of Robots (and How to Use Them) (KOFAX Blog, 10 April 2018)

Alex Seran, More than Hype: Real Value of Robotic Process Automation (RPA) (Huron, October 2018)

Sony Shetty, Gartner Says Worldwide Spending on Robotic Process Automation Software to Reach $680 Million in 2018 (Gartner, 13 November 2018)

Ayshwarya Venkataraman, How Robotic Process Automation Renounces Swivel Chair Automation with a Digital Workforce (Aspire Systems, 5 June 2018)


Wikipedia: Braess's Paradox, Diffusion of Innovations, Technology Adoption Lifecycle


Related posts: Process Automation and Intelligence (August 2019), Automation Ethics (August 2019)

Thursday, July 18, 2019

Robust Against Manipulation

As algorithms get more sophisticated, so do the opportunities to trick them. An algorithm can be forced or nudged to make incorrect decisions, in order to yield benefits to a (hostile) third party. John Bates, one of the pioneers of Complex Event Processing, has raised fears of algorithmic terrorism, but algorithmic manipulation may also be motivated by commercial interests or simple vandalism.

An extreme example of this could be a road sign that reads STOP to humans but is misread as something else by self-driving cars. Another example might be false signals that are designed to trigger algorithmic trading and thereby nudge markets. Given the increasing reliance on automatic screening machines at airports and elsewhere, there are obvious incentives for smugglers and terrorists to develop ways of fooling these machines - either to get their stuff past the machines, or to generate so many false positives that the machines aren't taken seriously. And of course email spammers are always looking for ways to bypass the spam filters.

"It will also become increasingly important that AI algorithms be robust against manipulation. A machine vision system to scan airline luggage for bombs must be robust against human adversaries deliberately searching for exploitable flaws in the algorithm - for example, a shape that, placed next to a pistol in one's luggage, would neutralize recognition of it. Robustness against manipulation is an ordinary criterion in information security; nearly the criterion. But it is not a criterion that appears often in machine learning journals, which are currently more interested in, e.g., how an algorithm scales upon larger parallel systems." [Bostrom and Yudkowsky]

One kind of manipulation involves the construction of misleading examples (known in the literature as "adversarial examples"). For example, an image crafted to exploit the inaccuracies of a specific image recognition algorithm so that it is incorrectly classified, thus producing an incorrect action (or suppressing the correct action).
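The best-known recipe for constructing adversarial examples is the fast gradient sign method (FGSM) of Goodfellow and colleagues: nudge every pixel a tiny amount in whichever direction increases the model's loss. Here is a minimal sketch, assuming a PyTorch image classifier with inputs scaled to [0, 1]; the model, the input and the perturbation budget epsilon are all placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, epsilon=0.03):
    """Perturb each pixel slightly in the direction that increases the loss,
    producing an image that looks unchanged to a human but may be misclassified."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage, with whatever classifier and labelled image you are probing:
# x_adv = fgsm_example(model, x.unsqueeze(0), torch.tensor([label]))
# print(model(x_adv).argmax())   # may no longer match the true label
```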

Another kind of manipulation involves poisoning the model - deliberately feeding a machine learning algorithm with biased or bad data, in order to disrupt or skew its behaviour. (Historical analogy: manipulation of pop music charts.)
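A crude form of poisoning can be simulated by flipping a fraction of the training labels before fitting a model. The sketch below uses a synthetic dataset and an off-the-shelf classifier purely to show the shape of the attack; a real adversary would corrupt the data far more selectively, skewing the model in a chosen direction while remaining hard to detect.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for whatever the real system is trained on.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction):
    """Flip a random subset of training labels, retrain, and measure test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned), int(flip_fraction * len(y_poisoned)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for fraction in (0.0, 0.1, 0.3):
    print(f"{fraction:.0%} of labels flipped -> test accuracy {accuracy_after_poisoning(fraction):.2f}")
```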

We have to assume that some bad actors will have access to the latest technologies, and will themselves be using machine learning and other techniques to design these attacks. This sets up an arms race between the good guys and the bad guys. Is there any way to keep advanced technologies from getting into the wrong hands?

In the security world, people are familiar with the concept of Distributed Denial of Service (DDoS). But perhaps this now becomes Distributed Distortion of Service. Which may be more subtle but no less dangerous.

While there are strong arguments for algorithmic transparency of automated systems, some people may be concerned that transparency will aid such attacks. The argument here is that the more adversaries can discover about the algorithm and its training data, the more opportunities for manipulation. But it would be wrong to conclude that we should keep algorithms safe by keeping them secret ("security through obscurity"). A better conclusion would be that transparency should be a defence against manipulation, by making it easier for stakeholders to detect and counter such attempts.




John Bates, Algorithmic Terrorism (Apama, 4 August 2010). To Catch an Algo Thief (Huffington Post, 26 Feb 2015)

Nick Bostrom and Eliezer Yudkowsky, The Ethics of Artificial Intelligence (2011)

Ian Goodfellow, Patrick McDaniel and Nicolas Papernot, Making Machine Learning Robust Against Adversarial Inputs (Communications of the ACM, Vol. 61 No. 7, July 2018) Pages 56-66. See also video interview with Papernot.

Neil Strauss, Are Pop Charts Manipulated? (New York Times, 25 January 1996)

Wikipedia: Security Through Obscurity

Related post: The Unexpected Happens (January 2017)

Tuesday, July 16, 2019

Nudge Technology

People are becoming aware of the ways in which AI and big data can be used to influence people, in accordance with Nudge Theory. Individuals can be nudged to behave in particular ways, and large-scale social systems (including elections and markets) can apparently be manipulated. In other posts, I have talked about the general ethics of nudging systems. This post will concentrate on the technological aspects.

Technologically mediated nudges are delivered by a sociotechnical system we could call a Nudge System. This system might contain several different algorithms and other components, and may even have a human-in-the-loop. Our primary concern here is about the system as a whole.

As an example, I am going to consider a digital advertisement in a public place, which shows anti-smoking messages whenever it detects tobacco smoke.

Typically, a nudge system would perform several related activities.

1. There would be some mechanism for "reading" the situation. For example, detecting the events that might trigger a nudge, as well as determining the context. This might be a simple sense-and-respond or it might include some more sophisticated analysis, using some kind of model. There is typically an element of surveillance here. In our example, let us imagine that the system is able to distinguish different brands of cigarette, and determine how many people are smoking in its vicinity.

2. Assuming that there was some variety in the nudges produced by the system, there would be a mechanism for selecting or constructing a specific nudge, using a set of predefined nudges or nudge templates. For example, different anti-smoking messages for the expensive brands versus the cheap brands. Combined with other personal data, the system might even be able to name and shame the smokers.

3. There would then be a mechanism for delivering or otherwise executing the nudge - for example, private (to a person's phone) or public (via a display board). We might call this the nudge agent. In some cases, the nudge may be delivered by a human, but prompted by an intelligent system. If the nudge is publicly visible, this could allow other people to infer the circumstances leading to the nudge - and therefore a potential breach of privacy. (For example, letting your friends and family know that you were having a sneaky cigarette, when you had told them that you had given up.)

4. In some cases, there might be a direct feedback loop, giving the system immediate data about the human response to the nudge. Obviously this will not always be possible. Nevertheless, we would expect the system to retain a record of the delivered nudges, for future analysis. To support multiple feedback tempos (as discussed in my work on Organizational Intelligence) there could be multiple components performing the feedback and learning function. Typically, the faster loops would be completely automated (autonomic) while the slower loops would have some human interaction.
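Putting these four activities together, the overall shape of such a system might look something like the sketch below. The smoke sensor, the message templates and the logging are all invented for illustration; the point is simply that detection, selection, delivery and feedback are distinct components, each of which can introduce its own errors and biases.

```python
import random
from datetime import datetime

class NudgeSystem:
    """Sketch of the four activities: read the situation, select a nudge,
    deliver it via a nudge agent, and keep a record for feedback and learning."""

    def __init__(self, sensor, display):
        self.sensor = sensor        # 1. reading the situation (the surveillance element)
        self.display = display      # 3. the nudge agent that delivers the message
        self.templates = {          # 2. predefined nudges / nudge templates
            "expensive": "That habit is costing you a small fortune.",
            "cheap": "Cheap cigarettes are still expensive for your health.",
        }
        self.log = []               # 4. record of delivered nudges, for later analysis

    def run_once(self):
        reading = self.sensor()     # e.g. {"smoke": True, "brand": "cheap"}
        if not reading.get("smoke"):
            return
        message = self.templates[reading["brand"]]
        self.display(message)
        self.log.append({"time": datetime.now(), "reading": reading, "message": message})

# Stand-in sensor and display, for illustration only.
fake_sensor = lambda: {"smoke": random.random() < 0.5,
                       "brand": random.choice(["cheap", "expensive"])}
system = NudgeSystem(sensor=fake_sensor, display=print)
for _ in range(5):
    system.run_once()
```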

There would typically be algorithms to support each of these activities, possibly based on some form of machine learning. This creates the potential for algorithmic bias at several points in the design of the system, as well as various forms of inaccuracy (for example false positives, where the system incorrectly detects tobacco smoke). More information doesn't always mean better information - for example, someone might design a sensor that would estimate the height of the smoker, in order to detect underage smokers - but this obviously introduces new possibilities of error.

In many cases, there will be a separation between the technology engineers who build systems and components, and the social engineers who use these systems and components to produce some commercial or social effects. This raises two different ethical questions.

Firstly, what does responsible use of nudge technology look like - in other words, what are the acceptable ways that nudge technology can be deployed? What purposes, what kind of content, the need for testing and continuous monitoring to detect any signals of harm or bias, and so on. Should the nudge be private to the nudgee, or could it be visible to others? What technical and organizational controls should be in place before the nudge technology is switched on?

And secondly, what does responsible nudge technology look like - in other words, technology that can be used safely and reliably, with reasonable levels of transparency and user control?

We may note that nudge technologies can be exploited by third parties with a commercial or political intent. For example, there are constant attempts to trick or subvert the search and recommendation algorithms used by the large platforms, and Alex Hern recently reported on Google's ongoing battle to combat misinformation and promotion of extreme content. So one of the requirements of responsible nudge technology is being Robust Against Manipulation.

We may also note that if there is any bias anywhere, this may either be inherent in the design of the nudge technology itself, or may be introduced by the users of the nudge technology when customizing it for a specific purpose. For example, nudges may be deliberately worded as "dog whistles" - designed to have a strong effect on some subjects while being ignored by others - and this can produce significant and possibly unethical bias in the working of the system. But this bias is not in the algorithm but in the nudge templates, and there may be other ways in which bias is relevant to nudging in general, so the question of algorithmic bias is not the whole story.



Alex Hern, Google tweaked algorithm after rise in US shootings (Guardian, 2 July 2019)

Wikipedia: Nudge Theory

Related posts: Organizational Intelligence in the Control Room (October 2010), On the Ethics of Technologically Mediated Nudge (May 2019), The Nudge as a Speech Act (May 2019), Algorithms and Governmentality (July 2019)


Wednesday, June 26, 2019

Listening for Trouble

Many US schools and hospitals have installed Aggression Detection microphones, which are claimed to detect the sounds of aggression, thus allowing staff or security personnel to intervene to prevent violence. Sound Intelligence, the company selling the system, claims that the detector has helped to reduce aggressive incidents. What are the ethical implications of such systems?

ProPublica recently tested one such system, enrolling some students to produce a range of sounds that might or might not trigger the alarm. They also talked to some of the organizations using it, including a hospital in New Jersey that has now decommissioned the system, following a trial that (among other things) failed to detect a seriously agitated patient. ProPublica's conclusion was that the system was "less than reliable".

Sound Intelligence is a Dutch company, which has been fitting microphones into street cameras for over ten years, in the Netherlands and elsewhere in Europe. This was approved by the Dutch Data Protection Regulator on the argument that the cameras are only switched on after someone screams, so the privacy risk is reduced.

But Dutch cities can be pretty quiet. As one of the developers admitted to the New Yorker in 2008, "We don’t have enough aggression to train the system properly". Many experts have questioned the validity of installing the system in an entirely different environment, and Sound Intelligence refused to reveal the source of the training data, including whether the data had been collected in schools.

In theory, a genuine scream can be identified by a sound pattern that indicates a partial loss of control of the vocal cords, although the accurate detection of this difference can be compromised by audio distortion (known as clipping). When people scream on demand, they protect their vocal cords and do not produce the same sound. (Actors are taught to simulate screams, but the technology can supposedly tell the difference.) So it probably matters whether the system is trained and tested using real screams or fake ones. (Of course, one might have difficulty persuading an ethics committee to approve the systematic production and collection of real screams.)

Can any harm be caused by such technologies? Apart from the fact that schools may be wasting money on stuff that doesn't actually work, there is a fairly diffuse harm of unnecessary surveillance. There may also be opportunities for the technologies to be used as a tool for harming someone - for example, by playing a doctored version of a student's voice in order to get that student into trouble. Or, if the security guard is a bit trigger-happy, to get that student killed.

Technologies like this can often be gamed. For example, a student or ex-student planning an act of violence would be aware of the system and would have had ample opportunity to test what sounds it did or didn't respond to.

Obviously no technology is completely risk-free. If a technology provides genuine benefits in terms of protecting people from real threats, then this may outweigh any negative side-effects. But if the benefits are unproven or imaginary, as ProPublica suggests, this is a more difficult equation.

ProPublica quoted a school principal from a quiet leafy suburb, who justified the system as providing "a bit of extra peace of mind". This could be interpreted as a desire to reassure parents with a false sense of security. Which might be justifiable if it allowed children and teachers to concentrate on schoolwork rather than worrying unnecessarily about unlikely scenarios, or pushing for more extreme measures such as arming the teachers. (But there is always an ethical question mark over security theatre of this kind.)

But let's go back to the nightmare scenario that the system is supposed to protect against. If a school or hospital equipped with this system were to experience a mass shooting incident, and the system failed to detect the incident quickly enough (which on the ProPublica evidence seems quite likely), the incident investigators might want to look at sound recordings from the system. Fortunately, these microphones "allow administrators to record, replay and store those snippets of conversation indefinitely". So that's alright then.

In addition to publishing its findings, ProPublica also published the methodology used for testing and analysis. The first point to note is that this was done with the active collaboration of the supplier. It seems they were provided with good technical information, including the internal architecture of the device and the exact specification of the microphone used. They were able to obtain an exactly equivalent microphone, and could rewire the device and intercept the signals. They discarded samples that had been subject to clipping.
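Detecting clipping is straightforward in principle: a clipped recording has runs of samples stuck at or near full scale. The sketch below shows one simple way of flagging such samples, assuming 16-bit audio loaded into a NumPy array; it is not ProPublica's actual test code, whose details are set out in their published methodology.

```python
import numpy as np

def clipped_fraction(samples, full_scale=32767, threshold=0.999):
    """Return the fraction of samples at or near full scale in 16-bit audio.
    A recording with more than a sliver of such samples has probably been clipped."""
    near_limit = np.abs(samples.astype(np.int64)) >= threshold * full_scale
    return near_limit.mean()

# Example with synthetic audio: a sine wave amplified beyond full scale and then truncated.
t = np.linspace(0, 1, 16000)
loud = np.clip(1.5 * np.sin(2 * np.pi * 440 * t) * 32767, -32768, 32767).astype(np.int16)
print(f"{clipped_fraction(loud):.1%} of samples are at or near full scale")
```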

The effectiveness of any independent testing and evaluation is clearly affected by the degree of transparency of the solution, and the degree of cooperation and support provided by the supplier and the users. So this case study has implications, not only for the testing of devices, but also for transparency and system access.




Jack Gillum and Jeff Kao, Aggression Detectors: The Unproven, Invasive Surveillance Technology Schools Are Using to Monitor Students (ProPublica, 25 June 2019)

Jeff Kao and Jack Gillum, Methodology: How We Tested an Aggression Detection Algorithm (ProPublica, 25 June 2019)

John Seabrook, Hello, Hal (New Yorker, 16 June 2008)

P.W.J. van Hengel and T.C. Andringa, Verbal aggression detection in complex social environments (IEEE Conference on Advanced Video and Signal Based Surveillance, 2007)

Groningen makes "listening cameras" permanent (Statewatch, Vol 16 no 5/6, August-December 2006)

Wikipedia: Clipping (Audio)

Related posts: Affective Computing (March 2019), False Sense of Security (June 2019)


Updated 28 June 2019. Thanks to Peter Sandman for pointing out a lack of clarity in the previous version.

Saturday, May 11, 2019

Whom does the technology serve?

When regular hyperbole isn't sufficient, writers often refer to new technologies as The Holy Grail of something or other. As I pointed out in my post on Chatbot Ethics, this has some important ethical implications.

Because in the mediaeval Parsifal legend, at a key moment in the story, our hero fails to ask the critical question: Whom Does the Grail Serve? And when technologists enthuse about the latest inventions, they typically overlook the same question: Whom Does the Technology Serve?

In a new article on driverless cars, Dr Ashley Nunes of MIT argues that academics have allowed themselves to be distracted by versions of the Trolley Problem (Whom Shall the Vehicle Kill?), and have neglected some much more important ethical questions.

For one thing, Nunes argues that so-called autonomous vehicles are never going to be fully autonomous. There will always be ways of controlling cars remotely, so the idea of a lone robot struggling with some ethical dilemma is just philosophical science fiction. Last year, he told Jesse Dunietz that he had yet to find a safety-critical transport system without real-time human oversight.

And in any case, road safety is never about one car at a time; it is about deconfliction - which means cars avoiding each other as well as pedestrians. With human driving, there are multiple deconfliction mechanisms that allow many vehicles to occupy the same space without hitting each other. These include traffic signals, road markings and other conventions indicating right of way, as well as signals (including honking and flashing lights) used to negotiate between drivers, or by drivers to show that they are willing to wait for a pedestrian to cross the road in front of them. Equivalent mechanisms will be required to enable so-called autonomous vehicles to provide a degree of transparency of intention, and therefore trust. (See Matthews et al; see also Applin and Fischer.) See also my post on the Ethics of Interoperability.


But according to Nunes, "the most important question that we should be asking about this technology" is "Who stands to gain from its life-saving potential?" Because "if those who most need it don’t have access, whose lives would we actually be saving?"

In other words, Whom Does The Grail Serve?




Sally Applin and Michael Fischer, Applied Agency: Resolving Multiplexed Communication in Automobiles (Adjunct Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '12), October 17–19, 2012, Portsmouth, NH, USA) HT @AnthroPunk

Rachel Coldicutt, Tech ethics, who are they good for? (8 June 2018)

Jesse Dunietz, Despite Advances in Self-Driving Technology, Full Automation Remains Elusive (Undark, 22 November 2018) HT @SafeSelfDrive

Ashley Nunes, Driverless cars: researchers have made a wrong turn (Nature Briefing, 8 May 2019) HT @vdignum @HumanDriving

Milecia Matthews, Girish Chowdhary and Emily Kieson, Intent Communication between Autonomous Vehicles and Pedestrians (2017)

Wikipedia: Trolley Problem


Related posts

For Whom (November 2006), Towards Chatbot Ethics - Whom Does the Chatbot Serve?  (May 2019), Ethics of Interoperability (May 2019), The Road Less Travelled - Whom Does the Algorithm Serve? (June 2019), Jaywalking (November 2019)