Many US schools and hospitals have installed Aggression Detection microphones that claim to detect sounds of aggression, thus allowing staff or security personnel to intervene to prevent violence. Sound Intelligence, the company selling the system, claims that the detector has helped to reduce aggressive incidents. What are the ethical implications of such systems?
ProPublica recently tested one such system, enrolling some students to produce a range of sounds that might or might not trigger the alarm. They also talked to some of the organizations using it, including a hospital in New Jersey that has now decommissioned the system, following a trial that (among other things) failed to detect a seriously agitated patient. ProPublica's conclusion was that the system was "less than reliable".
Sound Intelligence is a Dutch company, which has been fitting microphones into street cameras for over ten years, in the Netherlands and elsewhere in Europe. This was approved by the Dutch Data Protection Regulator on the argument that the cameras are only switched on after someone screams, so the privacy risk is reduced.
But Dutch cities can be pretty quiet. As one of the developers admitted to the New Yorker in 2008, "We don’t have enough aggression to train the system properly". Many experts have questioned the validity of installing the system in an entirely different environment, and Sound Intelligence refused to reveal the source of the training data, including whether the data had been collected in schools.
In theory, a genuine scream can be identified by a sound pattern that indicates a partial loss of control of the vocal cords, although the accurate detection of this difference can be compromised by audio distortion (known as clipping). When people scream on demand, they protect their vocal cords and do not produce the same sound. (Actors are taught to simulate screams, but the technology can supposedly tell the difference.) So it probably matters whether the system is trained and tested using real screams or fake ones. (Of course, one might have difficulty persuading an ethics committee to approve the systematic production and collection of real screams.)
Can any harm be caused by such technologies? Apart from the fact that schools may be wasting money on stuff that doesn't actually work, there is a fairly diffuse harm of unnecessary surveillance. Students may learn to suppress all varieties of loud noises, including sounds of celebration and joy. There may also be opportunities for the technologies to be used as a tool for harming someone - for example, by playing a doctored version of a student's voice in order to get that student into trouble. Or, if the security guard is a bit trigger-happy, to get that student killed.
Technologies like this can often be gamed. For example, a student or ex-student planning an act of violence would be aware of the system and would have had ample opportunity to test what sounds it did or didn't respond to.
Obviously no technology is completely risk-free. If a technology provides genuine benefits in terms of protecting people from real threats, then this may outweigh any negative side-effects. But if the benefits are unproven or imaginary, as ProPublica suggests, this is a more difficult equation.
ProPublica quoted a school principal from a quiet leafy suburb, who justified the system as providing "a bit of extra peace of mind". This could be interpreted as a desire to reassure parents with a false sense of security. Which might be justifiable if it allowed children and teachers to concentrate on schoolwork rather than worrying unnecessarily about unlikely scenarios, or pushing for more extreme measures such as arming the teachers. (But there is always an ethical question mark over security theatre of this kind.)
But let's go back to the nightmare scenario that the system is supposed to protect against. If a school or hospital equipped with this system were to experience a mass shooting incident, and the system failed to detect the incident quickly enough (which on the ProPublica evidence seems quite likely), the incident investigators might want to look at sound recordings from the system. Fortunately, these microphones "allow administrators to record, replay and store those snippets of conversation indefinitely". So that's alright then.
In addition to publishing its findings, ProPublica also published the methodology used for testing and analysis. The first point to note is that this was done with the active collaboration of the supplier. It seems ProPublica were provided with good technical information, including the internal architecture of the device and the exact specification of the microphone used. They were able to obtain an exactly equivalent microphone, and could rewire the device and intercept the signals. They also discarded samples that had been subject to clipping.
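To make the clipping point concrete, here is a minimal sketch of how clipped recordings might be flagged and discarded before analysis. This is my own illustration, not ProPublica's actual code: the function names, the 0.99 threshold and the run length are assumptions, and the only claim relied on is the standard one that clipping shows up as the waveform sitting at (or very near) full scale for consecutive samples.

```python
import numpy as np

def is_clipped(samples: np.ndarray, threshold: float = 0.99, min_run: int = 3) -> bool:
    """Flag a recording as clipped if the waveform stays at or near full scale
    for several consecutive samples, which flattens the peaks and distorts the
    features a detector relies on.

    `samples` is assumed to be normalised to the range [-1.0, 1.0];
    `threshold` and `min_run` are illustrative values, not ProPublica's.
    """
    near_full_scale = np.abs(samples) >= threshold
    run = 0
    for flag in near_full_scale:
        run = run + 1 if flag else 0
        if run >= min_run:
            return True
    return False

def discard_clipped(recordings):
    """Keep only recordings that show no sign of clipping."""
    return [r for r in recordings if not is_clipped(r)]
```

The point of such a filter is simply to avoid blaming (or crediting) the detection algorithm for distortion introduced by the recording chain itself.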
The effectiveness of any independent testing and evaluation is clearly affected by the degree of transparency of the solution, and the degree of cooperation and support provided by the supplier and the users. So this case study has implications, not only for the testing of devices, but also for transparency and system access.
Jack Gillum and Jeff Kao, Aggression Detectors: The Unproven, Invasive Surveillance Technology Schools Are Using to Monitor Students (ProPublica, 25 June 2019)
Jeff Kao and Jack Gillum, Methodology: How We Tested an Aggression Detection Algorithm (ProPublica, 25 June 2019)
John Seabrook, Hello, Hal (New Yorker, 16 June 2008)
P.W.J. van Hengel and T.C. Andringa, Verbal aggression detection in complex social environments (IEEE Conference on Advanced Video and Signal Based Surveillance, 2007)
Groningen makes "listening cameras" permanent (Statewatch, Vol 16 no 5/6, August-December 2006)
Wikipedia: Clipping (Audio)
Related posts: Affective Computing (March 2019), False Sense of Security (June 2019)
Updated 28 June 2019. Thanks to Peter Sandman for pointing out a lack of clarity in the previous version.