Tuesday, April 27, 2021

The Allure of the Smart City

The concept of the smart city seems to encompass a broad range of sociotechnical initiatives, including community-based healthcare, digital electricity, housing affordability and sustainability, next-generation infrastructure, noise pollution, air and water quality, robotic furniture, transport and mobility, and urban planning. The smart city is not a technology as such, but an assemblage of technologies.

Within this mix, there is often a sincere attempt to address some serious social and environmental concerns, such as reducing the city's carbon footprint. However, Professor Rob Kitchin notes a tendency towards greenwashing or even ethics washing.

Kitchin also raises concerns about civic paternalism - city authorities and their tech partners knowing what's best for the citizenry.

On the other hand, John Koetsier makes the If-We-Don't-Do-It-The-Chinese-Will argument. The same point was recently made by Jeremy Fleming in his 2021 Vincent Briscoe lecture. (See my post on the Invisibility of Infrastructure.)

Meanwhile, here is a small and possibly unrepresentative sample of Smart City initiatives in the West that have reached the press recently.

  • Madrid with IBM
  • Portland with Google Sidewalk - cancelled Feb 2021
  • San Jose with Intel - pilot programme
  • Toronto with Google Sidewalk (Quayside) - cancelled May 2020



Daniel Doctoroff, Why we’re no longer pursuing the Quayside project — and what’s next for Sidewalk Labs (Sidewalk Talk, 7 May 2020)

Rob Kitchin, The Ethics of Smart Cities (RTE, 27 April 2019)

John Koetsier, 9 Things We Lost When Google Canceled Its Smart Cities Project In Toronto (Forbes, 13 May 2020) 

Ryan Mark and Gregory Anya, Ethics of Using Smart City AI and Big Data: The Case of Four Large European Cities (Orbit, Vol 2/2, 2019)

Juan Pedro Tomás, Smart city case study: San Jose, California (Enterprise IOT Insights, 5 October 2017)

Jane Wakefield, The Google city that has angered Toronto (BBC News, 18 May 2019), Google-linked smart city plan ditched in Portland (BBC News, 23 February 2021)


See also IOT is coming to town (December 2017), On the invisibility of infrastructure (April 2021)


Wednesday, February 03, 2021

Andy Jassy

Most people still think of Amazon primarily as an online retailer, but the elevation of Andy Jassy to take over from Jeff Bezos as CEO provides further evidence for the strategic importance of Amazon Web Services (AWS) within the Amazon group.

AWS was launched in 2002 and relaunched in 2006. In March 2008, Om Malik published an interview with Ray Ozzie, then the Chief Software Architect at Microsoft, which included some positive comments about AWS. By the end of the year, both Google and Microsoft had announced rival cloud computing offerings. As far as I can see, cloud computing first appeared as an Emerging Technology on the Gartner Hype Curve (it's not a cycle) in 2008, reaching the Peak of Inflated Expectations by 2009.

During that period, I was a software industry analyst, calling out Jeff Bezos and Ray Ozzie as two of the most visionary players in the industry. My colleague Lawrence Wilkes wrote a long report on AWS in 2004. (But the hype around cloud computing took off later, and the broader awareness of AWS is comparatively recent, so I'm not convinced that the classic hype curve applies to this topic.)

Alongside the news of Jassy's elevation, today's tech press also reports that Google Cloud is still making massive losses. So much for the Slope of Enlightenment then.


Jasper Jolly, Bezos leaves Amazon in its prime – keeping it that way is the task (The Guardian, 3 February 2021)

Kieren McCarthy, So Jeff Bezos is stepping back from Amazon to play with his space rockets. Who's this Andy Jassy chap? (The Register, 3 February 2021)

Om Malik, GigaOM Interview: Ray Ozzie (GigaOM, 10 March 2008)

Ron Miller, What Andy Jassy’s promotion to Amazon CEO could mean for AWS (TechCrunch, 2 February 2021)

Simon Sharwood, Google's cloud services lost $14.6bn over three years – and CEO Sundar Pichai likes that trajectory (The Register, 3 February 2021)

Lawrence Wilkes, Amazon and eBay Web Services - The new enterprise applications? (CBDI Journal, October 2004) 

Related posts: Jeff Bezos and Ecosystem Thinking (February 2004), Amazon and eBay (August 2004), Internet Service Disruption (November 2005), Ray Ozzie (March 2008), Utility Computing and Profitability (March 2008)

Also Technology Hype Curve (September 2005)

Sunday, December 29, 2019

The Allure of the Smart Home

What exactly is a smart home, and why would I want to live in one?

I don't think the smart home concept is just about having the latest cool technology or containing some smart stuff. And many of the most commonly discussed examples of smart technology in the home seem to be merely modest improvements on earlier technologies, rather than something entirely new.

Let's look at some smart devices you might have in your home. Programmable thermostats have been available for ages, adjusting heating and/or air conditioning to maintain a comfortable temperature at certain times of day. Modern heating systems can now offer separate controls for each room, and be programmed to reduce your total energy consumption: such systems are typically marketed as intelligent systems. So whatever smart technology is doing in this area looks more like useful improvement than radical change.

Or how about remote control functionality? Remote control devices have been around for a long time, especially for couch potatoes who wished to change TV channels without the effort of walking a few feet across the room. Now we have voice-activated controls, for people who can't even be bothered to search under the cushions for the remote control device. Voice activation may be a bit more technologically sophisticated than pushing buttons, and some artificial intelligence may be required to recognize and interpret the voice commands, but it's basically the same need that is being satisfied here.

Or how about a chatbot to answer your questions? In most cases, the answers aren't hard-wired into the device, but are pulled from some source outside the home. So the chatbot is merely a communication device, as if you had a telephone hotline to Stephen Fry, only faster and always available, like several million Stephen Fry clones working in parallel around the clock. (You may choose any other knowledgeable and witty celebrity if you prefer.)

And the idea that having a chatbot device in your home makes the home itself smart is like thinking that having a smartphone in the pocket of your trousers turns them into smart trousers. Or that having Stephen Fry's phone number attached to your fridge door turns it into a smart fridge.

Of course, a smart system may have multiple components - different classes of device. You might install an intelligent security system, using cameras and other devices, to recognize and admit your children and pets, while keeping the home safe from unwanted visitors.

But surely the concept of smart home means more than just having a number of smart parts or subsystems; it implies that the home itself manifests some intelligence at the whole-system level. The primary requirement seems to be that these smart devices are connected, not to the world outside the home, but to each other, enabling them to orchestrate things. Not just home automation, but seamless home automation.

For example, suppose I make my home responsible for getting me to work on time. My home computer could monitor the traffic reports or disruption on public transport, check with my car whether I needed to allow extra time to refuel, send a message to my alarm clock to wake me up at the optimal time, having also instructed the heating system when to switch the boiler on.
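That kind of orchestration is essentially backward scheduling. Here is a minimal sketch, with all names and numbers purely illustrative: work backwards from the required arrival time, using the overnight traffic report and the car's fuel status, to set the alarm and the boiler.

```python
from datetime import datetime, timedelta

# Hypothetical morning orchestration: work back from the arrival deadline,
# folding in traffic delay and a possible fuel stop, to compute when the
# alarm should ring and when the boiler should switch on.

def plan_morning(arrive_by, commute_minutes, traffic_delay_minutes,
                 needs_fuel, getting_ready_minutes=45):
    travel = commute_minutes + traffic_delay_minutes
    if needs_fuel:
        travel += 10                                # allow a petrol station stop
    leave_home = arrive_by - timedelta(minutes=travel)
    wake_up = leave_home - timedelta(minutes=getting_ready_minutes)
    boiler_on = wake_up - timedelta(minutes=30)     # warm the house before the alarm
    return {"alarm": wake_up, "boiler": boiler_on, "leave": leave_home}

plan = plan_morning(
    arrive_by=datetime(2019, 12, 30, 9, 0),
    commute_minutes=40,
    traffic_delay_minutes=15,   # from the traffic report
    needs_fuel=True,            # reported by the car
)
print(plan["alarm"])   # 2019-12-30 07:10:00
```

Nothing in this calculation requires a round trip to anyone's cloud; the point at issue in the next paragraph is precisely whether such messages need to leave the house at all.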

Assuming I do not wish my movements to be known in advance by burglars and kidnappers, all of these messages need to be secure against eavesdropping. It isn't obvious to me why it would be necessary to transmit these messages via servers outside my house. Yes I know it's called the internet of things, but does that mean everything has to go via the internet?

Well yes apparently it does, if we follow the recently announced Connected Home over IP (CHIP) standards, to be developed jointly by Amazon, Apple, Google, and most of the other key players in the smart home market.

Many of those who commented on the Register article raised concerns about encryption. It seems unlikely at this point that the tech giants will be keen on end-to-end encryption, because surely they are going to want to feed your data into the hungry mouths of their machine learning starlings. So whatever security measures are included in the CHIP standards, they will probably represent a compromise, appearing to take security seriously but not seriously impeding the commercial and strategic interests of the vendors. Smart for them, not necessarily for us.

Sometimes it seems that the people who benefit most from the smart home are not those actually living in these homes but service providers, using your data to keep an eye on you. For example, landlords:
Smart home technology is an alluring proposition for the apartment industry: Provide renters with a home that integrates with and responds to their lifestyle, and increase rents, save on energy, and collect useful resident population data in return. Kayla Devon
Internet-connected locks and facial recognition systems have raised privacy concerns among tenants across the country. A sales pitch directed at landlords by a smart-home security company indicates that the technology could help them raise rental prices and potentially get people evicted. Alfred Ng
We should pass a law that would hold smart access companies to the highest possible standard while making certain that their technology is safe, secure and reliable for tenants. Michael McKee

Energy companies have been pushing smart meters and other smart technologies, supposedly to help you reduce your energy bills, but also to get involved in other aspects of your life. For example, Constellation promotes the benefits of smart home technology for maintaining the independence of the elderly, while Karen Jordan mentions the possibility of remote surveillance by family members living elsewhere.
Smart technology that recognizes patterns, such as the morning coffee-making routine mentioned earlier, could come in handy when those patterns are broken, perhaps alerting grown children that something may be amiss with an elderly parent. Karen Jordan

As Ashlee Clark Thompson points out, this kind of remote surveillance can benefit the children as well as the parents, providing peace of mind as well as reducing the need for physical visits to check up. 

And doubtless the energy companies have other ideas as well. According to Ross Clark:

Scottish and Southern Electricity Networks has proposed a system in which it will be able to turn off certain devices in our homes ... when the supply of electricity is too small to meet demand.

Finally, Ian Dunt grumbled that his smart thermostat was like having a secret flatmate, and got dozens of tweets in reply from people with similar frustrations.

So we keep coming back to the fundamental ethical question: Whom shall the smart home serve?

Footnote May 2021

Some legal advice for landlords just in from US law firm Orrick: "Tenant data may be an attractive source of new revenue, but landlords should proceed with caution" (13 May 2021). They also note that "New York City Council has enacted a Tenant Data Privacy Act that is poised to enhance privacy protections in multifamily buildings in the city" (27 May 2021).

Dieter Bohn, Situation: there are too many competing smart home standards. Surely a new one will fix it, right? (The Verge, 19 Dec 2019)

Ross Clark, The critics of smart meters were right all along (Telegraph, 19 September 2020) HT @tprstly

Constellation, Smart Homes Allow the Aging to Maintain Independence (published 20 July 2018, updated 13 August 2018)

Kayla Devon, The Lure of the Smart Apartment (MFE, 31 March 2016)

Karen Jordan, Set It And Forget It: The Lure Of Smart Apartments (Forbes, 28 August 2017)

Kieren McCarthy, The IoT wars are over, maybe? Amazon, Apple, Google give up on smart-home domination dreams, agree to develop common standards (The Register, 18 Dec 2019)

Michael McKee, Your Landlord Could Know That You’re Not at Home Right Now (New York Times, 17 December 2019)

Alfred Ng, Smart home tech can help evict renters, surveillance company tells landlords (CNET, 25 October 2019)

Ashlee Clark Thompson, Persuading your older parents to take the smart home leap (CNET, 11 April 2017)

Shannon Yavorsky and David Curtis, Unlocking the Value of Tenant Data (Orrick, 13 May 2021), Home Alone? New York City Enacts Tenant Data Privacy Act (Orrick, 27 May 2021) HT @christinayiotis

Related posts: Understanding the Value Chain of the Internet of Things (June 2015), Defeating the Device Paradigm (Oct 2015), Hidden Functionality (February 2019), Towards Chatbot Ethics - Whom does the chatbot serve? (May 2019), Driverless cars - Whom does the technology serve? (May 2019), The Road Less Travelled - Whom does the algorithm serve? (June 2019)


Updated 16 November 2020, 29 May 2021

Friday, October 11, 2019

Insights and Challenges from Mulesoft Connect 2019

#CONNECT19 @MuleSoft is a significant player in the Integration Platform as a Service (iPaaS) market. I've just spent some time at their annual customer event in London.

Over the years, I've been to many events like this. A technology company organizes an event, possibly repeated at several locations around the globe, attended by customers and prospects, business partners, employees and others. After an introduction with loud music and lights shining in your face, the CEO or CTO or honoured guest bounces onto the stage and delivers a keynote speech. Outside, there will be exhibition stands with product demonstrations, as well as information about complementary products and services.

At such events, we are presented with an array of messages from the company and its business partners, with endorsements and some useful insights from a handful of customers. So how to analyse and evaluate these messages?

Firstly, what's new. In the case of MuleSoft, the core technology vision of microservices, networked applications and B2B ecosystems has been around for many years. (At the CBDI Forum, we were talking about some of this stuff over ten years ago.) But it's useful to see how far the industry has got towards this vision, and how much further there is to go. In his presentation, MuleSoft CTO Uri Sarid described a complex ecosystem that might exist by around 2025, including demand-side orchestration of services. There is a fair amount of technology for supply-side orchestration of APIs, but demand-side orchestration isn't really there yet.

Furthermore, organizations are often cautious about releasing the APIs into the wild. For example, government departments may make APIs available to other government departments, local governments and the NHS, but in many cases it is not yet possible for citizens to consume these APIs directly, or for a third party (such as a charity) to act as a mediator. However, some sectors have made progress in this direction, thanks to initiatives such as Open Banking.

However, the technology has allowed all sorts of things to be done much faster and more reliably, so I heard some good stories about the speed with which new functionality can be rolled out across multiple endpoints. As some of the technical obstacles are removed, IT people should be able to shift their attention to business transformation, or even imagination and innovation.

MuleSoft argues that the API economy will drive / is driving a rapid cycle of (incremental) innovation, accelerating the pace of change in some ecosystems. MuleSoft is enthusiastic about citizen integration or democratization, shifting the initiative from the Town Planners to the Settlers and Pioneers. However, if APIs are to serve as reusable building blocks, they need to be built to last. (There is an important connection between ideas of trimodal development and ideas of pace layering, which needs to be teased out further.)

Secondly, what's different. Not just differences between the past and the present, but differences between Mulesoft and other vendors with comparable offerings. At a given point in time, each competing product will provide slightly different sets of features at different price points, but feature comparisons can get outdated very quickly. And if you are acquiring this kind of technology, you would ideally like to know the total cost of ownership, and the productivity you are likely to get. It takes time and money to research such questions properly, and I'm not surprised that so many people rely on versions of the Magic Sorting Hat produced by the large analyst firms.

By the way, this is not just about comparing MuleSoft with other iPaaS vendors, but also about comparing iPaaS with other technologies, such as Robotic Process Automation (RPA).

And thirdly, what's missing. Although I heard business strategy for APIs mentioned several times, I didn't hear much about how this could be done. Several speakers warned against using the term API for a non-technical audience, and advised people to talk about service benefit.

But how to identify and analyse service benefit? How do you identify the service value that can be delivered through APIs, how do you determine the right level of granularity and asset-specificity, and what are the design principles? In other words, how do you get the business requirements that drive the use of MuleSoft or other iPaaS products? I buttonholed a few consultants from the large consultancies, but the answers were mostly disappointing.

I plan to attend some similar events this month, and shall write a couple of general follow-up posts.


Here's an article I wrote in 2002, which mentioned Sun Microsystems' distinction between micro services and macro services.

Richard Veryard, Identifying Web Services (CBDI Journal, February 2002)

And here are two articles discussing demand-side orchestration.

Richard Veryard and Philip Boxer, Metropolis and SOA Governance (Microsoft Architecture Journal, July 2005)

Philip Boxer and Richard Veryard, Taking Governance to the Edge (Microsoft Architecture Journal, August 2006)

See also CBDI Journal Archive

Link to MuleSoft presentations https://library.mulesoft.com/mulesoft-connect-london-2019/

Related post: Strategy and Requirements for the API Ecosystem (October 2019)

Friday, August 09, 2019

RPA - Real Value or Painful Experimentation?

In May 2017, Fran Karamouzis of Gartner stated that "96% of clients are getting real value from RPA" (Robotic Process Automation). But by October/November 2018, RPA was declared to be at the top of the Gartner "hype cycle", also known as the Peak of Inflated Expectations.

So from a peak of inflated expectations we should not be surprised to see RPA now entering a trough of disillusionment, with surveys showing significant levels of user dissatisfaction. Phil Fersht of HfS explains this in terms that will largely be familiar from previous technological innovations.
  • The over-hyping of how "easy" this is
  • Lack of real experiences being shared publicly
  • Huge translation issues between business and IT
  • Obsession with "numbers of bots deployed" versus quality of outcomes
  • Failure of the "Big iron" ERP vendors and the digital juggernauts to embrace RPA 
"You can't focus on a tools-first approach to anything." adds @jpmorgenthal

There are some generic models and patterns of technology adoption and diffusion that are largely independent of the specific technology in question. When Everett Rogers and his colleagues did the original research on the adoption of new technology by farmers in the 1950s, it made sense to identify a spectrum of attitudes, with "innovators" and "early adopters" at one end, and with "late adopters" or "laggards" at the other end. Clearly some people can be attracted by a plausible story of future potential, while others need to see convincing evidence that an innovation has already succeeded elsewhere.
Diffusion of Innovations (Source: Wikipedia)

Obviously adoption by organizations is a slightly more complicated matter than adoption by individual farmers, but we can find a similar spread of attitudes within a single large organization. There may be some limited funding to carry out early trials of selected technologies (what Fersht describes as "sometimes painful experimentation"), but in the absence of positive results it gets progressively harder to justify continued funding. Opposition from elsewhere in the organization comes not only from people who are generally sceptical about technology adoption, but also from people who wish to direct the available resources towards some even newer and sexier technology. The "pioneers" have moved on to something else, and the "settlers" aren't yet ready to settle. There is a discontinuity in the adoption curve, which Geoffrey Moore calls "crossing the chasm".

Note: The terms "pioneers" and "settlers" refers to the trimodal approach. See my post Beyond Bimodal (May 2016).

But as Fersht indicates, there are some specific challenges for RPA in particular. Although it's supposed to be about process automation, some of the use cases I've seen are simply doing localized application patching, using robots to perform ad hoc swivel-chair integration. Not even paving the cow-paths, but paving the workarounds. Tool vendors such as KOFAX recommend specific robot types for different patching requirements. The problem with this patchwork approach to automation is that while each patch may make sense in isolation, the overall architecture progressively becomes more complicated.

There is a common view of process optimization that suggests you concentrate on fixing the bottlenecks, as if the rest of the process can look after itself, and this view has been adopted by many people in the RPA world. For example Ayshwarya Venkataraman, who describes herself on LinkedIn as a technology evangelist, asserts that "process optimization can be easily achieved by automating some tasks in a process".

But fixing a bottleneck in one place often exposes a bottleneck somewhere else. Moreover, complicated workflow solutions may be subject to Braess's paradox, which says that under certain circumstances adding capacity to a network can actually slow it down. So you really need to understand the whole end-to-end process (or system-of-systems).
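The textbook illustration of Braess's paradox can be worked through in a few lines. In the standard example, 4000 commuters choose between two routes, each combining one congestion-sensitive link (travel time proportional to traffic) and one fixed-time link; adding a free shortcut between the routes makes everyone's journey longer.

```python
# Classic Braess's paradox worked example (standard textbook numbers).
# Each route = one congestion-sensitive link (cars/100 minutes) plus one
# fixed 45-minute link. A zero-cost shortcut lets drivers chain the two
# congestion-sensitive links - and at equilibrium they all do, because
# any individual driver who deviates only makes their own trip slower.

DRIVERS = 4000
FIXED = 45                       # minutes on the wide, load-insensitive link

def congested(cars):
    # minutes on the narrow link, proportional to the number of cars using it
    return cars / 100

# Equilibrium without the shortcut: drivers split evenly, 2000 per route.
without = congested(DRIVERS / 2) + FIXED              # 20 + 45 = 65 minutes

# Equilibrium with the shortcut: every driver takes both narrow links.
with_shortcut = congested(DRIVERS) + congested(DRIVERS)   # 40 + 40 = 80 minutes

print(f"Without shortcut: {without} min; with shortcut: {with_shortcut} min")
```

Adding capacity worsens every individual journey from 65 to 80 minutes, which is why automating or "fixing" one step without modelling the end-to-end process can degrade the whole.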

And there's an ethical point here as well. Human-computer processes need to be designed not only for efficiency and reliability but also for job satisfaction. The robots should be configured to serve the people, not just taking over the easily-automated tasks and leaving the human with a fragmented and incoherent job serving the robots.

And the more bots you've got (the more bot licences you've bought), the more the challenge shifts from getting each bot to work properly to combining large numbers of bots in a meaningful and coordinated way. Adding a single robotic patch to an existing process may deliver short-term benefits, but how are users supposed to mobilize and combine hundreds of bots in a coherent and flexible manner, to deliver real lasting enterprise-scale value? Ravi Ramamurthy believes that a rich ecosystem of interoperable robots will enable a proliferation of automation - but we aren't quite there yet.

Phil Fersht, Gartner: 96% of customers are getting real value from RPA? Really? (HfS 23 May 2017), With 44% dissatisfaction, it's time to get real about the struggles of RPA 1.0 (HfS, 31 July 2019)

Geoffrey Moore, Crossing the Chasm (1991)

Susan Moore, Gartner Says Worldwide Robotic Process Automation Software Market Grew 63% in 2018 (Gartner, 24 June 2019)

Ravi Ramamurthy, Is Robotic Automation just a patchwork? (6 December 2015)

Everett Rogers, Diffusion of Innovations (First published 1962, 5th edition 2003)

Daniel Schmidt, 4 Indispensable Types of Robots (and How to Use Them) (KOFAX Blog, 10 April 2018)

Alex Seran, More than Hype: Real Value of Robotic Process Automation (RPA) (Huron, October 2018)

Sony Shetty, Gartner Says Worldwide Spending on Robotic Process Automation Software to Reach $680 Million in 2018 (Gartner, 13 November 2018)

Ayshwarya Venkataraman, How Robotic Process Automation Renounces Swivel Chair Automation with a Digital Workforce (Aspire Systems, 5 June 2018)

Wikipedia: Braess's Paradox, Diffusion of Innovations, Technology Adoption Lifecycle

Related posts: Process Automation and Intelligence (August 2019), Automation Ethics (August 2019)

Thursday, July 18, 2019

Robust Against Manipulation

As algorithms get more sophisticated, so do the opportunities to trick them. An algorithm can be forced or nudged to make incorrect decisions, in order to yield benefits to a (hostile) third party. John Bates, one of the pioneers of Complex Event Processing, has raised fears of algorithmic terrorism, but algorithmic manipulation may also be motivated by commercial interests or simple vandalism.

An extreme example of this could be a road sign that reads STOP to humans but is misread as something else by self-driving cars. Another example might be false signals that are designed to trigger algorithmic trading and thereby nudge markets. Given the increasing reliance on automatic screening machines at airports and elsewhere, there are obvious incentives for smugglers and terrorists to develop ways of fooling these machines - either to get their stuff past the machines, or to generate so many false positives that the machines aren't taken seriously. And of course email spammers are always looking for ways to bypass the spam filters.

"It will also become increasingly important that AI algorithms be robust against manipulation. A machine vision system to scan airline luggage for bombs must be robust against human adversaries deliberately searching for exploitable flaws in the algorithm - for example, a shape that, placed next to a pistol in one's luggage, would neutralize recognition of it. Robustness against manipulation is an ordinary criterion in information security; nearly the criterion. But it is not a criterion that appears often in machine learning journals, which are currently more interested in, e.g., how an algorithm scales upon larger parallel systems." [Bostrom and Yudkowsky]

One kind of manipulation involves the construction of misleading inputs (known in the literature as "adversarial examples") - inputs that exploit the inaccuracies of a specific image recognition algorithm to produce an image that will be incorrectly classified, thus producing an incorrect action (or suppressing the correct action).
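The mechanics can be sketched with a toy model. The following is purely illustrative - a linear "classifier" standing in for a real vision network, with made-up labels - but the perturbation step is the genuine Fast Gradient Sign Method idea: move each pixel slightly in the direction that most reduces the classifier's score.

```python
import numpy as np

# Toy linear "image classifier": score w @ x > 0 means class "stop sign".
# (A stand-in for a trained network; all names and numbers are illustrative.)
rng = np.random.default_rng(0)
w = rng.normal(size=100)           # model weights
x = np.where(w > 0, 1.0, 0.0)      # an input the model scores firmly as "stop sign"

def classify(image):
    return "stop sign" if w @ image > 0 else "something else"

# Fast Gradient Sign Method: perturb each pixel by at most eps in the
# direction that decreases the score. For a linear model, the gradient of
# the score with respect to the input is simply w.
eps = 0.7
x_adv = x - eps * np.sign(w)

print(classify(x))                       # the original is read correctly
print(classify(x_adv))                   # the perturbed copy is misread
print(float(abs(x_adv - x).max()))       # no pixel moved by more than eps
```

Real attacks use the same logic against deep networks, where the per-pixel budget can be small enough that humans barely notice the change.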

Another kind of manipulation involves poisoning the model - deliberately feeding a machine learning algorithm with biased or bad data, in order to disrupt or skew its behaviour. (Historical analogy: manipulation of pop music charts.)

We have to assume that some bad actors will have access to the latest technologies, and will themselves be using machine learning and other techniques to design these attacks, and this sets up an arms race between the good guys and the bad guys. Is there any way to keep advanced technologies from getting in the wrong hands?

In the security world, people are familiar with the concept of Distributed Denial of Service (DDoS). But perhaps this now becomes Distributed Distortion of Service, which may be more subtle but no less dangerous.

While there are strong arguments for algorithmic transparency of automated systems, some people may be concerned that transparency will aid such attacks. The argument here is that the more adversaries can discover about the algorithm and its training data, the more opportunities for manipulation. But it would be wrong to conclude that we should keep algorithms safe by keeping them secret ("security through obscurity"). A better conclusion would be that transparency should be a defence against manipulation, by making it easier for stakeholders to detect and counter such attempts.

John Bates, Algorithmic Terrorism (Apama, 4 August 2010), To Catch an Algo Thief (Huffington Post, 26 February 2015)

Nick Bostrom and Eliezer Yudkowsky, The Ethics of Artificial Intelligence (2011)

Ian Goodfellow, Patrick McDaniel and Nicolas Papernot, Making Machine Learning Robust Against Adversarial Inputs (Communications of the ACM, Vol. 61 No. 7, July 2018) Pages 56-66. See also video interview with Papernot.

Neil Strauss, Are Pop Charts Manipulated? (New York Times, 25 January 1996)

Wikipedia: Security Through Obscurity

Related posts: The Unexpected Happens (January 2017)

Tuesday, July 16, 2019

Nudge Technology

People are becoming aware of the ways in which AI and big data can be used to influence people, in accordance with Nudge Theory. Individuals can be nudged to behave in particular ways, and large-scale social systems (including elections and markets) can apparently be manipulated. In other posts, I have talked about the general ethics of nudging systems. This post will concentrate on the technological aspects.

Technologically mediated nudges are delivered by a sociotechnical system we could call a Nudge System. This system might contain several different algorithms and other components, and may even have a human-in-the-loop. Our primary concern here is about the system as a whole.

As an example, I am going to consider a digital advertisement in a public place, which shows anti-smoking messages whenever it detects tobacco smoke.

Typically, a nudge system would perform several related activities.

1. There would be some mechanism for "reading" the situation. For example, detecting the events that might trigger a nudge, as well as determining the context. This might be simple sense-and-respond, or it might include more sophisticated analysis using some kind of model. There is typically an element of surveillance here. In our example, let us imagine that the system is able to distinguish different brands of cigarette, and determine how many people are smoking in its vicinity.

2. Assuming that there was some variety in the nudges produced by the system, there would be a mechanism for selecting or constructing a specific nudge, using a set of predefined nudges or nudge templates. For example, different anti-smoking messages for the expensive brands versus the cheap brands. Combined with other personal data, the system might even be able to name and shame the smokers.

3. There would then be a mechanism for delivering or otherwise executing the nudge. For example, the nudge might be delivered privately (to a person's phone) or publicly (via a display board). We might call this the nudge agent. In some cases, the nudge may be delivered by a human, but prompted by an intelligent system. If the nudge is publicly visible, this could allow other people to infer the circumstances leading to the nudge - therefore a potential breach of privacy. (For example, letting your friends and family know that you were having a sneaky cigarette, when you had told them that you had given up.)

4. In some cases, there might be a direct feedback loop, giving the system immediate data about the human response to the nudge. Obviously this will not always be possible. Nevertheless, we would expect the system to retain a record of the delivered nudges, for future analysis. To support multiple feedback tempos (as discussed in my work on Organizational Intelligence) there could be multiple components performing the feedback and learning function. Typically, the faster loops would be completely automated (autonomic) while the slower loops would have some human interaction.
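The four activities can be sketched as a single pipeline. Every class, template and brand name below is hypothetical - this is an illustration of the shape of such a system, not any real product's API.

```python
from dataclasses import dataclass

@dataclass
class SmokeEvent:
    brand: str          # the brand the sensor believes it detected
    smoker_count: int

class NudgeSystem:
    # Activity 2: a set of predefined nudge templates, keyed by context.
    TEMPLATES = {
        "premium": "That habit costs you more than money.",
        "budget": "Quitting is the cheapest option of all.",
    }
    PREMIUM_BRANDS = {"BrandX"}     # illustrative brand classification

    def __init__(self, display):
        self.display = display      # the nudge agent - here, a public screen
        self.log = []               # activity 4: retained for the slower feedback loops

    def read_situation(self, reading):
        # Activity 1: sense-and-respond, triggering only when smoke is detected.
        if reading.get("tobacco_smoke"):
            return SmokeEvent(reading["brand"], reading["count"])
        return None

    def select_nudge(self, event):
        # Activity 2: select the template appropriate to the context.
        key = "premium" if event.brand in self.PREMIUM_BRANDS else "budget"
        return self.TEMPLATES[key]

    def deliver(self, nudge, event):
        # Activity 3: execute the nudge; activity 4: record it for analysis.
        self.display(nudge)
        self.log.append((event, nudge))

system = NudgeSystem(display=print)
event = system.read_situation({"tobacco_smoke": True, "brand": "BrandX", "count": 3})
if event:
    system.deliver(system.select_nudge(event), event)
```

Even in this crude sketch, the ethically sensitive choices - which templates exist, how contexts are classified, what gets logged and for how long - sit in configuration rather than in any single algorithm.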

There would typically be algorithms to support each of these activities, possibly based on some form of Machine Learning, and there is the potential for algorithmic bias at several points in the design of the system, as well as various forms of inaccuracy (for example false positives, where the system incorrectly detects tobacco smoke). More information doesn't always mean better information - for example, someone might design a sensor that would estimate the height of the smoker, in order to detect underage smokers - but this obviously introduces new possibilities of error.
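The false-positive point is worth quantifying. A quick base-rate calculation (with illustrative numbers) shows that even a fairly accurate detector produces mostly false alarms when the thing it is detecting is rare:

```python
# Base-rate illustration (all numbers hypothetical): how often is an
# alert from the smoke detector actually correct?

sensitivity = 0.95      # P(alert | smoke present)
false_positive = 0.05   # P(alert | no smoke)
prevalence = 0.02       # smoke actually present in 2% of readings

true_alerts = sensitivity * prevalence
false_alerts = false_positive * (1 - prevalence)
precision = true_alerts / (true_alerts + false_alerts)

print(f"Share of alerts that are genuine: {precision:.0%}")  # about 28%
```

So a "95% accurate" sensor in this setting would be wrong nearly three times out of four, which is exactly why adding further error-prone sensors (such as the height estimator) can make the overall system less trustworthy rather than more.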

In many cases, there will be a separation between the technology engineers who build systems and components, and the social engineers who use these systems and components to produce some commercial or social effects. This raises two different ethical questions.

Firstly, what does responsible use of nudge technology look like - in other words, what are the acceptable ways that nudge technology can be deployed? What purposes, what kind of content, the need for testing and continuous monitoring to detect any signals of harm or bias, and so on. Should the nudge be private to the nudgee, or could it be visible to others? What technical and organizational controls should be in place before the nudge technology is switched on?

And secondly, what does responsible nudge technology look like - in other words, one that can be used safely and reliably, with reasonable levels of transparency and user control.

We may note that nudge technologies can be exploited by third parties with a commercial or political intent. For example, there are constant attempts to trick or subvert the search and recommendation algorithms used by the large platforms, and Alex Hern recently reported on Google's ongoing battle to combat misinformation and promotion of extreme content. So one of the requirements of responsible nudge technology is being Robust Against Manipulation.

We may also note that if there is any bias anywhere, this may either be inherent in the design of the nudge technology itself, or may be introduced by the users of the nudge technology when customizing it for a specific purpose. For example, nudges may be deliberately worded as "dog whistles" - designed to have a strong effect on some subjects while being ignored by others - and this can produce significant and possibly unethical bias in the working of the system. But this bias is not in the algorithm but in the nudge templates, and there may be other ways in which bias is relevant to nudging in general, so the question of algorithmic bias is not the whole story.

Alex Hern, Google tweaked algorithm after rise in US shootings (Guardian, 2 July 2019)

Wikipedia: Nudge Theory

Related posts: Organizational Intelligence in the Control Room (October 2010), On the Ethics of Technologically Mediated Nudge (May 2019), The Nudge as a Speech Act (May 2019), Algorithms and Governmentality (July 2019)