<p>Technology Update: Commentary and analysis on advanced products and platforms, by Richard Veryard</p><hr /><p><b>From ChatGPT to Infinite Sets</b> (14 May 2023)</p><p>In a recent article on ChatGPT, Evgeny Morozov mentioned the psychoanalyst Ignacio Matte Blanco. Many psychoanalysts regard the unconscious as structured like a language, and Matte Blanco is known for developing a mathematical model of the unconscious based on infinite sets. Meanwhile, in a recent talk at CRASSH Cambridge, Marcus Tomalin described the inner workings of ChatGPT and similar large language models (LLMs) as advanced matrix algebra. So what can Matte Blanco's model tell us about ChatGPT and the mathematics that underpins it?</p>
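<p>Tomalin's phrase <q>advanced matrix algebra</q> is fairly literal: at the heart of these models, an attention step is little more than a few matrix products followed by a row-wise normalization. A minimal sketch, with toy dimensions and random weights purely for illustration (not any particular model's actual parameters):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8          # four tokens, eight-dimensional embeddings (toy sizes)

X = rng.normal(size=(seq_len, d_model))        # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv               # queries, keys, values: plain matrix products

scores = Q @ K.T / np.sqrt(d_model)            # similarity of every token to every other
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row becomes a probability distribution

output = weights @ V                           # each token's output is a weighted mix of values
print(output.shape)                            # (4, 8)
```

<p>Everything interesting, on this view, lives in which weight matrices the training process happens to find.</p>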
<p>Eric Rayner explains Matte Blanco's model as follows:</p><blockquote><q>The unconscious, since it can simultaneously contain great numbers of generalized ideas, notions, propositional functions or emotional conceptions is, as it were, a capacious dimensional <q>mathematician</q>.</q> <cite>Rayner p 93</cite></blockquote>
<p>Between 2012 and 2018, <a href="https://www.researchgate.net/profile/Fionn-Murtagh-2">Fionn Murtagh</a> published several papers on the relationship between Matte Blanco's model and the mathematics underpinning data analytics. He identifies one of the key elements of the model: <q>the unconscious does not know individuals but only classes or propositional functions which define the class</q>. </p><p>Apart from Professor Murtagh's papers, I have not found any other references to Matte Blanco in this context. I have, however, found several papers that reference Lacan, including an interesting one by Luca Possati, who argues that <q>the originality of AI lies in its profound connection with the human unconscious</q>.</p><p>The ability of large language models to become disconnected from some conventional notion of reality is typically called hallucination. Naomi Klein objects to the anthropomorphic implications of this word, and her point is well taken given the political and cultural context in which it is generally used, but the word nonetheless seems appropriate if we are to follow a psychoanalytic line of inquiry.<br /></p>
<blockquote><q>Without the self having a containing framework of awareness of asymmetrical relations play breaks down into delusion.</q> <cite>Rayner p 37</cite></blockquote>
<p>Perhaps the clearest example of hallucination is a chatbot imagining facts about itself. In his talk, Dr Tomalin reports a conversation he had with the chatbot BlenderBot 3. He tells it that his dog has just died; BlenderBot 3 replies that it has two dogs, named Baxter and Maxwell. No doubt a human psychopath might consciously lie about such matters in order to fake empathy, but even if we regard the chatbot as a stochastic psychopath (as Tomalin suggests), it is not clear that the chatbot is consciously lying. If androids can dream of electric sheep, why can't chatbots dream of dog ownership?</p><p>Or to put it another way, and using Matte Blanco's bi-logic, if the unconscious is structured like a language, symmetry demands that language is structured like the unconscious.<br /></p><hr /><p>Naomi Klein, <a href="https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein">AI machines aren’t ‘hallucinating’. But their makers are</a>
(Guardian, 8 May 2023)<br /></p><p>Evgeny Morozov, <a href="https://www.theguardian.com/commentisfree/2023/mar/30/artificial-intelligence-chatgpt-human-mind">The problem with artificial intelligence? It’s neither artificial nor intelligent</a> (Guardian, 30 March 2023)</p><p>Fionn Murtagh, Ultrametric Model of Mind, I: Review (2012) <a href="https://arxiv.org/abs/1201.2711">https://arxiv.org/abs/1201.2711</a><br /></p><p>Fionn Murtagh, Ultrametric Model of Mind, II: Review (2012) <a href="https://arxiv.org/abs/1201.2719">https://arxiv.org/abs/1201.2719</a></p><p>Fionn Murtagh, <a href="http://www.language-and-psychoanalysis.com/article/view/1581">Mathematical
Representations of Matte Blanco’s Bi-Logic, based on Metric Space and
Ultrametric or Hierarchical Topology: Towards Practical Application</a> (Language and Psychoanalysis, 2014, 3 (2), 40-63) <br /></p><p>Luca Possati, <a href="https://doi.org/10.1057/s41599-020-0445-0">Algorithmic unconscious: why psychoanalysis helps in understanding AI</a> (Palgrave Communications, 2020)</p><p>Eric Rayner, Unconscious Logic: An introduction to Matte Blanco's Bi-Logic and its uses (London: Routledge, 1995)<br /></p><p>Marcus Tomalin, <a href="https://www.youtube.com/watch?v=AzP_PV_NeGk">Artificial Intelligence: Can Systems Like ChatGPT Automate Empathy</a> (CRASSH Cambridge, 31 March 2023)</p><p>Stephen Wolfram, <a href="https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/">What is ChatGPT doing … and why does it work?</a> (14 February 2023)</p><p>Related posts: <a href="https://rvsoftware.blogspot.com/2023/04/chatbotics-coercion-of-senses.html">Chatbotics - Coercion of the Senses</a> (April 2023), <a href="https://posiwid.blogspot.com/2023/07/the-mad-hatter-works-out.html">The Mad Hatter Works Out</a> (July 2023)</p><hr /><p><b>Chatbotics - Coercion of the Senses</b> (11 April 2023)</p><p>In a recent talk at CRASSH Cambridge, Marcus Tomalin described the inner workings of ChatGPT and similar large language models (LLMs) as advanced matrix algebra, and asked whether we could really regard these systems as manifesting empathy. A controversial 2021 paper (which among other things resulted in Timnit Gebru's departure from Google) characterized large language models as <b>stochastic parrots</b>. Tomalin suggested we could also regard them as <b>stochastic psychopaths</b>, given the ability of (human) psychopaths to manipulate people. 
While psychopaths are generally thought to lack the kind of affective empathy that other humans possess, they are sometimes described as possessing <b>cold empathy</b> or <b>dark empathy</b>, which enables them to control other people's emotions.</p><p>If we want to consider whether an algorithm can display empathy, we could ask the same question about other constructed entities including organizations. Let's start with so-called empathetic marketing. Tomalin's example was the L'Oreal slogan <q>because you're worth it</q>.</p><p>If some instances of marketing are described in terms of "empathy",
where is the empathy supposed to be located? In the case of the L'Oreal
slogan, relevant affect may be situated not just in the consumer but
also in individuals working for the company. The copywriter who created the slogan in 1971 was Ilon Specht. Many years later she told Malcolm Gladwell, <q>It was very personal. I can recite to you the whole commercial, because I was so angry when I wrote it</q>. Gladwell quoted a friend of hers as saying <q>Ilon had a degree of neurosis that made her very interesting</q>. </p><p>And then there is Joanne Dusseau, the model who first spoke the words.</p><blockquote><q>I took the tag line seriously,</q> she says. <q>I felt it all those thousands of times I said it. I never took it for granted. Over time, it changed me for the better.</q> <cite>Vogue</cite></blockquote><p>So if this is what it takes to produce and sustain one of the most effective and long-lasting marketing messages, what affective forces can large language models assemble? Or to put it another way, how might empathy emerge?</p><p>Another area where algorithmic empathy needs careful consideration is in mental health. There are many apps that claim to provide help to people with mental health issues. If these apps appear to display any kind of empathy with the user, this might increase the willingness of the user to accept any guidance or nudge. (In a psychotherapeutic context, this could be framed in terms of transference, with the algorithm playing the role of the "subject supposed to know".) Over the longer term, it might result in over-reliance or dependency.</p><p>One of the earliest recorded examples of a person confiding in a pseudo-therapeutic machine was when Joseph Weizenbaum's secretary was caught talking to ELIZA. Katherine Hayles offers an interesting interpretation of this incident, suggesting that ELIZA might have seemed to provide the dispassionate and non-judgemental persona that human therapists take years of training to develop.<br /></p><p>I did some work a few years ago on technology ethics in relation to nudging. This was largely focused on the actions that the nudge might encourage. I need to go back and look at this topic in terms of empathy and affect. 
Watch this space.</p><hr /><p>Emily Bender et al, <a href="https://doi.org/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?</a> (FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, pp. 610–623)</p><p>Malcolm Gladwell, <a href="https://www.newyorker.com/magazine/1999/03/22/hair-dye-clairol-loreal-malcolm-gladwell">True Colors: Hair dye and the hidden history of postwar America</a> (New Yorker, 22 March 1999)</p><p>N Katherine Hayles, Trauma of Code (Critical Inquiry, Vol. 33, No. 1, Autumn 2006, pp. 136-157)<br /></p><p>Naomi Pike, <a href="https://www.vogue.co.uk/beauty/article/loreal-paris-because-youre-worth-it">As L’Oréal Paris’ Famed Tagline “Because You’re Worth It” Turns 50, The Message Proves As Poignant As Ever</a> (Vogue, 8 March 2021)</p><p>Marcus Tomalin, <a href="https://www.youtube.com/watch?v=AzP_PV_NeGk">Artificial Intelligence: Can Systems Like ChatGPT Automate Empathy</a> (CRASSH Cambridge, 31 March 2023) </p><p>Related posts: <a href="https://rvsoftware.blogspot.com/2019/05/towards-chatbot-ethics.html">Towards Chatbot Ethics</a> (May 2019), <a href="https://rvsoftware.blogspot.com/2021/12/the-sad-reality-of-chatbotics.html">The Sad Reality of Chatbotics</a> (December 2021), <a href="https://rvsoftware.blogspot.com/2023/05/from-chatgpt-to-infinite-sets.html">From ChatGPT to Infinite Sets</a> (May 2023)<br /></p><hr /><p><b>Lie Detectors at Airports</b> (10 April 2022)</p><p>@<a href="https://twitter.com/jamesvgingerich/status/1512466297451433990">jamesvgingerich</a> reports that the EU is putting lie detector robots on its borders. 
@<a href="https://twitter.com/Abebab/status/1512869221105123335">Abebab</a> is horrified.</p><p></p><blockquote class="twitter-tweet"><p dir="ltr" lang="en">this is a horrible move. AI/robots CAN'T detect lies. looks like even the EU falls for AI snake oil. <a href="https://t.co/Kdl3uIfMOq">https://t.co/Kdl3uIfMOq</a></p>— Abeba Birhane (@Abebab) <a href="https://twitter.com/Abebab/status/1512869221105123335?ref_src=twsrc%5Etfw">April 9, 2022</a></blockquote><p> <script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script> </p><p>There are several things worth noting here. </p><p>Firstly, lie detectors work by detecting involuntary actions (eye movements, heart rate) that are thought to be a proxy for mendacity. But there are often alternative explanations for these actions, and so the interpretation of these is highly problematic. See my post on <a href="https://demandingchange.blogspot.com/2008/06/memory-and-law.html">Memory and the Law</a> (June 2008)<br /></p><p>Secondly, there is a lot of expensive and time-wasting technology installed at airports already, which has dubious value in detecting genuine threats, but may help to make people feel safer. Bruce Schneier calls this <a href="https://en.wikipedia.org/wiki/Security_theater">Security Theatre</a>. See my posts on the <a href="https://posiwid.blogspot.com/2019/06/false-sense-of-security.html">False Sense of Security</a> (June 2019) and <a href="https://rvsoapbox.blogspot.com/2008/03/heathrow-terminal-5.html">Identity, Privacy and Security at Heathrow Terminal Five</a> (March 2008).<br /></p><p>What is more important is to consider the consequences of adding this component (whether reliable or otherwise) to the larger system. 
In my post <a href="https://rvsoftware.blogspot.com/2019/06/listening-for-trouble.html">Listening for Trouble</a> (June 2019), I discussed the use of Aggression Detection microphones in US schools, following an independent study that was carried out with active collaboration from the supplier of the equipment. Obviously this kind of evaluation requires some degree of transparency.</p><p>Most important of all is the ethical question. Is this technology biased against certain categories of subject, and what are the real-world consequences of being falsely identified by this system? Is having a <q>human in the loop</q> sufficient protection against the dangers of algorithmic profiling? See <a href="https://posiwid.blogspot.com/2021/03/algorithmic-bias.html">Algorithmic Bias</a> (March 2021).</p><p>Given the inaccuracy of detection, there may be a significant rate of false positives and false negatives. False positives affect the individual concerned, suffering consequences ranging from inconvenience and delay to much worse. False negatives mean that a person has got away with an undetected lie, so this has consequences for society as a whole. </p><p>How much you think this matters depends on what you think they might be lying about, and how important this is. For example, it may be quicker to say you packed your suitcase yourself and it hasn't been out of your sight, even if this is not strictly true, because any other answer may trigger loads of other time-wasting questions. However, other lies may be more dangerous ... </p><p> </p><hr />
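<p>The base-rate arithmetic behind those false positives is worth making explicit. A back-of-envelope sketch; every number below (the 1% lie prevalence, the accuracy figures) is invented for illustration, not taken from iBorderCtrl:</p>

```python
# Base-rate sketch: even a seemingly accurate lie detector flags mostly honest
# travellers when actual lies are rare. All figures are illustrative assumptions.
travellers = 100_000
lie_rate = 0.01             # assume 1% of travellers lie about something material
sensitivity = 0.80          # assume 80% of lies are flagged
false_positive_rate = 0.10  # assume 10% of truthful travellers are wrongly flagged

liars = travellers * lie_rate
truthful = travellers - liars

true_positives = liars * sensitivity              # lies caught
false_positives = truthful * false_positive_rate  # honest travellers flagged
false_negatives = liars - true_positives          # lies that get through undetected

flagged = true_positives + false_positives
precision = true_positives / flagged              # chance a flagged traveller actually lied

print(f"flagged: {flagged:.0f}, of whom actually lying: {precision:.1%}")
```

<p>With these assumed numbers, fewer than 8% of flagged travellers were actually lying, while 200 lies still slipped through. The consequences on both sides fall exactly as described above.</p>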
<p>For more details on the background of this initiative, see<br /></p>
<p>Matthias Monroy, <a href="https://digit.site36.net/2021/04/26/eu-project-iborderctrl-is-the-lie-detector-coming-or-not/">EU project iBorderCtrl: Is the lie detector coming or not?</a>
(26 April 2021)<br /></p><hr /><p><b>The Sad Reality of Chatbotics</b> (23 December 2021)</p><p>As I noted in my previous post on chatbotics, <a href="https://rvsoftware.blogspot.com/2019/05/towards-chatbot-ethics.html">Towards Chatbot Ethics</a> (May 2019), the chatbot has sometimes been pitched as a kind of Holy Grail. This prompts the question I discussed before: <i>whom shall the chatbot serve</i>?</p><p>Chatbots are designed to serve their master - and this is generally the organization that runs them, not necessarily the consumer, even if you have paid good money to have one of these curious cylinders in your home. For example, Amazon's Alexa is supposed to encourage consumers to access other Amazon services, including retail and entertainment - and this is how Amazon expects to make a financial return on the sale and distribution of these devices.<br /></p><p>But how well do these devices work even for the organizations that run them? The journalist Priya Anand (who tweets at @PriyasIdeas) has been following this question for a while. Back in 2018, she talked to digital marketing experts who warned that voice shopping was unlikely to take off quickly. Her latest article notes the attempts by Amazon Alexa to nudge consumers into shopping, which may simply cause some frustrated consumers to switch the thing off altogether. Does this explain the apparently high attrition rates?</p><p>If you are selling a device at a profit, it may not matter if people don't use it much. But if you are selling a device at an initial loss, expecting to recoup the money when the device is used, then you have to find ways of getting people to use the thing. </p><p>Perhaps if Amazon can use its Machine Learning chops to guess what we want before we've even said anything, then the chatbots can cut out some of the annoying chatter. 
Apparently Alexa engineers think this would be more natural. Others might argue Natural's Not In It. (Coercion of the senses? We're not so gullible.)<br /></p><p><br /></p><hr /><p>Priya Anand, <a href="https://www.theinformation.com/articles/the-reality-behind-voice-shopping-hype">The Reality Behind Voice Shopping Hype</a> (The Information, 6 August 2018)</p><p>Priya Anand, <a href="https://www.bloomberg.com/news/articles/2021-12-22/amazon-s-voice-controlled-smart-speaker-alexa-can-t-hold-customer-interest-docs">Amazon’s Alexa Stalled With Users as Interest Faded, Documents Show</a> (Bloomberg, 22 December 2021)</p><p>
Daphne Leprince-Ringuet, <a href="https://www.zdnet.com/article/alexa-can-now-guess-what-you-want-before-you-even-ask-for-it/">Alexa can now guess what you want before you even ask for it</a>
(ZDNet, 13 November 2020)<br /></p><p>Tom McKay, <a href="https://gizmodo.com/report-almost-nobody-is-using-amazons-alexa-to-actuall-1828148762">Report: Almost Nobody Is Using Amazon's Alexa to Actually Buy Stuff</a> (Gizmodo, 6 August 2018)</p><p>Chris Matyszczyk, <a href="https://www.zdnet.com/article/amazon-wants-you-to-keep-quiet-for-a-brilliantly-sinister-reason/">Amazon wants you to keep quiet, for a brilliantly sinister reason</a>
(ZDNet, 4 November 2021)<br /></p><p>Related posts: <a href="https://rvsoftware.blogspot.com/2019/05/towards-chatbot-ethics.html">Towards Chatbot Ethics</a> (May 2019), <a href="https://demandingchange.blogspot.com/2019/09/technology-and-discreet-cough.html">Technology and the Discreet Cough</a> (September 2019), <a href="https://rvsoftware.blogspot.com/2023/04/chatbotics-coercion-of-senses.html">Chatbotics - Coercion of the Senses</a> (April 2023)</p><hr /><p><b>Automation and the Red Queen Effect</b> (28 September 2021)</p><p>Product vendors and technology advisory firms often talk about accelerating automation. A popular way of presenting advice is in the form of an executive survey - <q><i>look, all your peers are thinking about this, so you'd better spend some money with us too</i></q>. Thus the Guardian reports a survey carried out by one of the large consulting firms, which concluded that <q>almost half of company bosses in 45 countries are speeding up plans to automate their businesses</q>. In the early days of the COVID-19 pandemic, many saw this as a great opportunity to push automation further, although more recent commentators have been more sceptical.</p><p>Politicians have also bought into this narrative. For example, Barack Obama's farewell presidential address referred to <q>the relentless pace of automation</q>.<br /></p><p>For technical change more generally, there is a common belief that things are constantly getting faster. 
In my previous posts on what is sometimes known as the <a href="https://demandingchange.blogspot.com/search/label/red%20queen%20effect">Red Queen Effect</a>, I have expressed the view that perceptions of technological change often seem to be distorted by proximity and a subjective notion of significance - certain types of recent innovation being regarded as more important and exciting than other, older innovations.<br /></p>
<p><a href="https://www.theguardian.com/world/2020/mar/30/bosses-speed-up-automation-as-virus-keeps-workers-home">Bosses speed up automation as virus keeps workers home</a> (Guardian, 30 March 2020)</p><p><a href="https://www.economist.com/finance-and-economics/2021/06/19/is-the-pandemic-accelerating-automation-dont-be-so-sure">Is the pandemic accelerating automation? Don’t be so sure</a> (Economist, 19 June 2021) subscription required<br /></p><p>Aaron Benanav, Automation and the Future of Work. Part One (New Left Review 119 Sept/Oct 2019) Part Two (New Left Review 120, Nov/Dec 2019)<br /></p><p>Aaron Benanav, <a href="https://www.theguardian.com/commentisfree/2020/jan/23/robots-economy-growth-wages-jobs">Automation isn't wiping out jobs. It's that our engine of growth is winding down</a> (Guardian, 23 January 2020)</p><p>Phil Jones, <a href="https://www.versobooks.com/books/3869-work-without-the-worker">Work without the worker - Labour in the age of platform capitalism</a> (Verso, 2021) Extract published as <a href="https://restofworld.org/2021/refugees-machine-learning-big-tech/">Refugees help power machine learning advances at Microsoft, Facebook, and Amazon</a> (Rest of World, 22 September 2021)<br /></p><p>Toby McClean, <a href="https://www.forbes.com/sites/forbestechcouncil/2021/01/05/automation-is-accelerating-at-the-edge-to-improve-workplace-safety-productivity/">Automation Is Accelerating At The Edge To Improve Workplace Safety, Productivity</a> (Forbes, 5 January 2021) </p><p>Judy Wajcman, Automation: is it really different this time? 
(British Journal of Sociology, 2017)<br /></p><p>Chris Wiltz, <a href="https://www.designnews.com/automation-motion-control/grocery-automation-accelerating-thanks-coronavirus">Grocery Automation Is Accelerating Thanks to the Coronavirus</a> (Design News, 16 April 2020)<br /></p><hr /><p><b>The Allure of the Smart City</b> (27 April 2021)</p><p>The concept of the smart city seems to encompass a broad range of sociotechnical initiatives, including community-based healthcare, digital electricity, housing affordability and sustainability, next-generation infrastructure, noise pollution, quality of air and water, robotic furniture, transport and mobility, and urban planning. The smart city is not a technology as such, but rather an assemblage of technologies.<br /></p><p>Within this mix, there is often a sincere attempt to address some serious social and environmental concerns, such as reducing the city's carbon footprint. However, Professor Rob Kitchin notes a tendency towards greenwashing or even ethics washing.<br /></p><p>Kitchin also raises concerns about <b>civic paternalism</b> - city authorities and their tech partners knowing what's best for the citizenry.<br /></p><p>On the other hand, John Koetsier makes the point that <b><i>If-We-Don't-Do-It-The-Chinese-Will</i></b>. This point was also recently made by Jeremy Fleming in his 2021 Vincent Briscoe lecture. (See my post on the <a href="https://demandingchange.blogspot.com/2021/04/on-invisibility-of-infrastructure.html">Invisibility of Infrastructure</a>.)<br /></p><p>Meanwhile, here is a small and possibly unrepresentative sample of Smart City initiatives in the West that have reached the press recently. 
<br /></p><ul style="text-align: left;"><li>Madrid with IBM<br /></li><li>Portland with Google Sidewalk - cancelled Feb 2021</li><li>San Jose with Intel - <a href="https://newsroom.intel.com/news-releases/san-jose-implements-intel-technology-for-a-smarter-city/">pilot programme</a><br /></li><li>Toronto with Google Sidewalk (Quayside) - cancelled May 2020<br /></li></ul><p> </p><hr /><p> </p><p>Daniel Doctoroff, <a href="https://medium.com/sidewalk-talk/why-were-no-longer-pursuing-the-quayside-project-and-what-s-next-for-sidewalk-labs-9a61de3fee3a">Why we’re no longer pursuing the Quayside project — and what’s next for Sidewalk Labs</a>
(Sidewalk Talk, 7 May 2020)</p><p><a href="https://www.kitchin.org/">Rob Kitchin</a>, <a href="https://www.rte.ie/brainstorm/2019/0425/1045602-the-ethics-of-smart-cities/">The Ethics of Smart Cities</a> (RTE, 27 April 2019)<br /></p><p>John Koetsier, <a href="https://www.forbes.com/sites/johnkoetsier/2020/05/13/9-things-we-lost-when-google-canceled-its-smart-cities-project-in-toronto/">9 Things We Lost When Google Canceled Its Smart Cities Project In Toronto</a> (Forbes, 13 May 2020) </p><p>Ryan Mark and Gregory Anya, <a href="https://doi.org/10.29297/orbit.v2i2.110">Ethics of Using Smart City AI and Big Data: The Case of Four Large European Cities</a> (Orbit, Vol 2/2, 2019)<br /></p><p>Juan Pedro Tomás, <a href="https://enterpriseiotinsights.com/20171005/smart-cities/san-jose-moves-towards-integral-smart-city-initiative-tag23-tag99">Smart city case study: San Jose, California</a> (Enterprise IOT Insights, 5 October 2017)<br /></p><p>Jane Wakefield, <a href="https://www.bbc.co.uk/news/technology-47815344">The Google city that has angered Toronto</a> (BBC News, 18 May 2019), <a href="https://www.bbc.co.uk/news/technology-56168306">Google-linked smart city plan ditched in Portland</a> (BBC News, 23 February 2021)<br /></p><p>See also <a href="https://rvsoftware.blogspot.com/2017/12/iot-is-coming-to-town.html">IOT is coming to town</a> (December 2017), <a href="https://demandingchange.blogspot.com/2021/04/on-invisibility-of-infrastructure.html">On the invisibility of infrastructure</a> (April 2021)<br /></p><hr /><p><b>Andy Jassy</b> (3 February 2021)</p><p>Most people still think of Amazon primarily as an online retailer, but the elevation of Andy Jassy to take over from Jeff Bezos as CEO provides further evidence for the strategic importance of Amazon Web Services (AWS) within the Amazon 
group. </p><p>AWS was launched in 2002 and relaunched in 2006. In March 2008, Om Malik published an interview with Ray Ozzie, then the Chief Software Architect at Microsoft, which included some positive comments about AWS. By the end of the year, both Google and Microsoft had announced rival cloud computing offerings. As far as I can see, cloud computing first appeared as an Emerging Technology on the Gartner Hype Curve (it's not a cycle) in 2008, reaching the Peak of Inflated Expectations by 2009.</p><p>During that period, I was a software industry analyst, calling out Jeff Bezos and Ray Ozzie as two of the most visionary players in the industry. My colleague Lawrence Wilkes wrote a long report on AWS in 2004. (But the hype around cloud computing took off later, and the broader awareness of AWS is comparatively recent, so I'm not convinced that the classic hype curve applies to this topic.)<br /></p><p>Alongside the news of Jassy's elevation, today's tech press also reports that Google Cloud is still making massive losses. So much for the Slope of Enlightenment then.<br /></p><p><br /></p><hr /><p> </p><p>Jasper Jolly, <a href="https://www.theguardian.com/technology/2021/feb/03/bezos-leaves-amazon-in-its-prime-keeping-it-that-way-is-the-task">Bezos leaves Amazon in its prime – keeping it that way is the task</a> (The Guardian, 3 February 2021)<br /></p><p>Kieren McCarthy, <a href="https://www.theregister.com/2021/02/03/jeff_bezos_jassy/">So Jeff Bezos is stepping back from Amazon to play with his space rockets. Who's this Andy Jassy chap?</a>
(The Register, 3 February 2021)</p><p>Om Malik, <a href="https://gigaom.com/2008/03/10/the-gigaom-interview-ray-ozzie-microsoft-corp/">GigaOM Interview: Ray Ozzie</a> (GigaOM, 10 March 2008)</p><p>Ron Miller, <a href="https://techcrunch.com/2021/02/02/what-andy-jassys-promotion-to-amazon-ceo-could-mean-for-aws/">What Andy Jassy’s promotion to Amazon CEO could mean for AWS</a>
(TechCrunch, 2 February 2021)<br /></p><p>Simon Sharwood, <a href="https://www.theregister.com/2021/02/03/google_q4_2020/">Google's cloud services lost $14.6bn over three years – and CEO Sundar Pichai likes that trajectory</a>
(The Register, 3 February 2021)</p><p>Lawrence Wilkes, <a href="http://everware-cbdi.com/public/downloads/0ikgw/journal2004-10.pdf">Amazon and eBay Web Services - The new enterprise applications?</a> (CBDI Journal, October 2004) </p><p>Related posts: <a href="https://rvsoapbox.blogspot.com/2004/02/jeff-bezos-and-ecosystem-thinking.htm">Jeff Bezos and Ecosystem Thinking</a> (February 2004), <a href="https://rvsoapbox.blogspot.com/2004/08/amazon-and-ebay.htm">Amazon and eBay</a> (August 2004), <a href="https://rvsoapbox.blogspot.com/2005/11/internet-services-disruption.htm">Internet Service Disruption</a> (November 2005), <a href="https://rvsoapbox.blogspot.com/2008/03/ray-ozzie.html">Ray Ozzie</a> (March 2008), <a href="https://rvsoapbox.blogspot.com/2008/03/utility-computing-and-profitability.html">Utility Computing and Profitability</a> (March 2008)</p><p>Also <a href="https://demandingchange.blogspot.com/2005/09/technology-hype-curve.html">Technology Hype Curve</a> (September 2005)<br /></p><hr /><p><b>The Allure of the Smart Home</b> (29 December 2019)</p>What exactly is a smart home, and why would I want to live in one?<br />
<br />
I don't think the smart home concept is just about having the latest cool technology or containing some smart stuff. And many of the most commonly discussed examples of smart technology in the home seem to be merely modest improvements on earlier technologies, rather than something entirely new.<br />
<br />
Let's look at some smart devices you might have in your home.
Programmable thermostats have been available for ages, adjusting heating
and/or air conditioning to maintain a comfortable
temperature at certain times of day.
Modern heating systems can now offer separate controls for each room,
and be
programmed to reduce your total energy consumption: such systems are
typically marketed as <q>intelligent</q> systems. So whatever smart technology is doing in this area looks more like useful improvement than radical change.<br />
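The programmable part really is simple: a schedule lookup plus a comparison. A minimal sketch of the logic, with setpoints and times invented purely for illustration:

```python
# Minimal programmable-thermostat logic: pick a target temperature from a daily
# schedule, then decide whether to call for heat. All times and setpoints invented.
SCHEDULE = [          # (start_hour, target_celsius)
    (0, 16.0),        # overnight setback
    (6, 20.0),        # morning warm-up
    (9, 17.0),        # house empty during the day
    (17, 21.0),       # evenings
    (22, 16.0),       # back to setback
]

def target_temp(hour: int) -> float:
    """Return the setpoint in force at the given hour (0-23)."""
    current = SCHEDULE[-1][1]          # before the first entry, the last one still applies
    for start, temp in SCHEDULE:
        if hour >= start:
            current = temp
    return current

def heating_on(hour: int, room_temp: float, hysteresis: float = 0.5) -> bool:
    """Call for heat only when the room falls below setpoint minus hysteresis."""
    return room_temp < target_temp(hour) - hysteresis

print(target_temp(7), heating_on(7, 18.0))   # 20.0 True
```

Per-room controls and energy optimization layer more inputs onto the same loop, which is why the result feels like refinement rather than revolution.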
<br />
Or how about remote control functionality? Remote control devices have been around for a long time, especially for
couch potatoes who wished to change TV channels without the effort of walking a few feet across the room. Now we have voice-activated controls, for people who can't even be bothered to search under the cushions for the remote control device. Voice activation may be a bit more technologically sophisticated than pushing buttons, and some artificial intelligence may be required to recognize and interpret the voice commands, but it's basically the same need that is being satisfied here.<br />
<br />
Or how about a chatbot to answer your questions? In most cases, the answers aren't hard-wired into the device, but are pulled from some source outside the home. So the chatbot is merely a communication device, as if you had a telephone hotline to Stephen Fry, only faster and always available, like several million Stephen Fry clones working in parallel around the clock. (You may choose any other knowledgeable and witty celebrity if you prefer.)<br />
<br />
And the idea that having a chatbot device in your home <b>makes the home itself smart</b> is like thinking that having a smartphone in the pocket of your trousers turns them into <q>smart trousers</q>. Or that having Stephen Fry's phone number attached to your fridge door turns it into a <q>smart fridge</q>.<br />
<br />
Of course, a smart system may have multiple components - different classes of device. You might install an intelligent security
system, using cameras and other devices, to recognize and admit your children and pets, while keeping
the home safe from unwanted visitors.<br />
<br />
But surely the concept of a smart home means more than just having a number of smart parts or subsystems: it implies that the home itself manifests
some intelligence at the <b>whole-system </b>level. The primary requirement seems to be that these smart devices are connected, not to the world outside the home, but to <b>each other</b>, enabling them to <b>orchestrate </b>things. Not just <b>home automation</b>, but <b>seamless home automation</b>.<br />
<br />
For example, suppose I make my home responsible for getting me to work on time. My home computer could monitor traffic reports and public transport disruption, check with my car whether I need to allow extra time to refuel, send a message to my alarm clock to wake me up at the optimal time, having also instructed the heating system when to switch the boiler on. <br />
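A minimal sketch of what such orchestration might look like, working backwards from the required arrival time. All the parameters and device interfaces here are invented for illustration; in practice the delay and fuel inputs would come from a traffic feed and from the car itself.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the wake-up orchestration described above.
# The device interfaces and data sources are invented for illustration.

def plan_morning(arrive_by, commute_minutes, traffic_delay_minutes,
                 needs_fuel, prep_minutes=45):
    """Work backwards from the required arrival time to a wake-up time."""
    travel = commute_minutes + traffic_delay_minutes
    if needs_fuel:
        travel += 10  # allow a stop to refuel
    wake_up = arrive_by - timedelta(minutes=travel + prep_minutes)
    return {
        "wake_alarm_at": wake_up,
        "boiler_on_at": wake_up - timedelta(minutes=30),  # pre-heat the house
    }

plan = plan_morning(
    arrive_by=datetime(2020, 1, 6, 9, 0),
    commute_minutes=40,
    traffic_delay_minutes=15,  # from the traffic report
    needs_fuel=True,           # reported by the car
)
print(plan["wake_alarm_at"])   # 2020-01-06 07:10:00
```

The point of the sketch is that none of this arithmetic needs to leave the house: it is a simple local computation over messages from local devices.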
<br />
Assuming I do not wish my movements to be known in advance by burglars and kidnappers, all of these messages need to be secure against eavesdropping. It isn't obvious to me why it would be necessary to transmit these messages via servers outside my house. Yes, I know it's called the internet of things, but does that mean everything has to go via the internet?
<br />
<br />
Well yes apparently it does, if we follow the recently announced <b>Connected Home over IP</b> (CHIP) standards, to be developed jointly by Amazon, Apple, Google, and most of the other key players in the smart home market.<br />
<br />
Many of those who commented on the Register article raised concerns about encryption. It seems unlikely at this point that the tech giants will be keen on end-to-end encryption, because surely they are going to want to feed your data into the hungry mouths of their machine learning starlings. So whatever security measures are included in the CHIP standards, they will probably represent a compromise, appearing to take security seriously but not seriously impeding the commercial and strategic interests of the vendors. Smart for them, not necessarily for us. <br />
<br />
Sometimes it seems that the people who benefit most from the smart home are not those actually living in these homes but service providers, using your data to keep an eye on you. For example, landlords: <br />
<blockquote class="tr_bq">
<q>Smart home technology is an alluring proposition for the apartment
industry: Provide renters with a home that integrates with and responds
to their lifestyle, and increase rents, save on energy, and collect
useful resident population data in return.</q> <cite>Kayla Devon</cite></blockquote>
<blockquote class="tr_bq">
<q>Internet-connected locks and facial recognition systems have raised privacy concerns among tenants across the country. A sales pitch directed at landlords by a smart-home security company indicates that the technology could help them raise rental prices and potentially get people evicted.</q> <cite>Alfred Ng</cite></blockquote>
<blockquote class="tr_bq">
<q>We should pass a law that would hold smart access companies to the
highest possible standard while making certain that their technology is
safe, secure and reliable for tenants.</q> <cite>Michael McKee</cite></blockquote>
<br />
Energy companies have been pushing smart meters and other smart technologies, supposedly to help you reduce your energy bills, but also to get involved in other aspects of your life. For example, Constellation promotes the benefits of smart home technology for maintaining the independence of the elderly, while Karen Jordan mentions the possibility of remote surveillance by family members living elsewhere.<br />
<blockquote class="tr_bq">
<q>Smart technology that recognizes patterns, such as the morning
coffee-making routine mentioned earlier, could come in handy when those
patterns are broken, perhaps alerting grown children that something may
be amiss with an elderly parent.</q> <cite>Karen Jordan</cite></blockquote><p>
As Ashlee Clark Thompson points out, this kind of remote surveillance can benefit the children as well as the parents, providing peace of mind as well as reducing the need for physical visits to check up. </p><p>And doubtless the energy companies have other ideas as well. According to Ross Clark <br /></p><blockquote><q>Scottish and Southern Electricity Networks has proposed a system in which it will be able to
turn off certain devices in our homes ... when the supply of electricity
is too small to meet demand.</q></blockquote><p>Finally, Ian Dunt grumbled that his smart thermostat was like having a secret flatmate. <br /></p>
<blockquote class="twitter-tweet" data-conversation="none"><p dir="ltr" lang="en">It's really not clear to me why I paid extra to be undermined by my own heating.</p>— Ian Dunt (@IanDunt) <a href="https://twitter.com/IanDunt/status/1328296196255977474?ref_src=twsrc%5Etfw">November 16, 2020</a></blockquote><p> and got dozens of Tweets in reply, from people with similar frustrations. <br /></p><p>So we keep coming back to the fundamental ethical question: <i><b>Whom shall the smart home serve?</b></i><br />
<br />
</p><hr /><p>
<br />Footnote May 2021</p><p>Some legal advice for landlords just in from US law firm Orrick: "Tenant data may be an attractive source of new revenue, but landlords should proceed with caution" (<a href="https://www.orrick.com/en/Insights/2021/05/Unlocking-the-Value-of-Tenant-Data">13 May 2021</a>). They also note that "New York City Council has enacted a <a href="https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4196254&GUID=29A4B0E2-4C1F-472B-AE88-AE10B5313AC1&Options=&Search=" rel="noopener noreferrer" target="_blank">Tenant Data Privacy Act</a> that is poised to enhance privacy protections in multifamily buildings in the city" (<a href="https://www.orrick.com/en/Insights/2021/05/Home-Alone-New-York-City-Enacts-Tenant-Data-Privacy-Act">27 May 2021</a>).</p><p><br /></p><hr /><p>Dieter Bohn, <a href="https://www.theverge.com/2019/12/19/21028256/smart-home-standard-google-apple-amazon-alexa-siri-zigbee-choip">Situation: there are too many competing smart home standards. Surely a new one will fix it, right?</a> (The Verge, 19 Dec 2019)</p><p></p><p>Ross Clark, <span class="break-words"><span dir="ltr"><a href="https://www.telegraph.co.uk/news/2020/09/19/critics-smart-meters-right-along2/">The critics of smart meters were right all along</a> (Telegraph, 19 September 2020) HT </span></span><span class="break-words"><span dir="ltr"><span class="break-words"><span dir="ltr"></span></span> @<a href="https://www.linkedin.com/posts/theopriestley_the-critics-of-smart-meters-were-right-all-activity-6713008585267277824-Ytab">tprstly</a></span></span><br />
<br />
Constellation, <a href="https://blog.constellation.com/2018/07/20/smart-home-for-seniors/">Smart Homes Allow the Aging to Maintain Independence</a> (published 20 July 2018, updated 13 August 2018)<br />
<br />
Kayla Devon, <a href="https://www.multifamilyexecutive.com/technology/the-lure-of-the-smart-apartment_o">The Lure of the Smart Apartment</a> (MFE, 31 March 2016)<br />
<br />
Karen Jordan, <a href="https://www.forbes.com/sites/bisnow/2017/08/28/set-it-and-forget-it-the-lure-of-smart-apartments/">Set It And Forget It: The Lure Of Smart Apartments</a> (Forbes, 28 August 2017)<br />
<br />
Kieren McCarthy, <a href="https://www.theregister.co.uk/2019/12/18/iot_standards_war/">The IoT wars are over, maybe? Amazon, Apple, Google give up on smart-home domination dreams, agree to develop common standards</a> (The Register, 18 Dec 2019)<br />
<br />
Michael McKee, <a href="https://www.nytimes.com/2019/12/17/opinion/smart-access-tenants-rights.html">Your Landlord Could Know That You’re Not at Home Right Now</a> (New York Times, 17 December 2019)<br />
<br />
Alfred Ng, <a href="https://www.cnet.com/news/install-smart-home-tech-evict-renters-surveillance-company-tells-landlords/">Smart home tech can help evict renters, surveillance company tells landlords</a> (CNET, 25 October 2019)<br />
<br />
Ashlee Clark Thompson, <a href="https://www.cnet.com/news/how-to-have-the-tech-talk-with-your-aging-parents/">Persuading your older parents to take the smart home leap</a> (CNET, 11 April 2017)</p><p>Shannon Yavorsky and David Curtis, <a href="https://www.orrick.com/en/Insights/2021/05/Unlocking-the-Value-of-Tenant-Data">Unlocking the Value of Tenant Data</a> (Orrick 13 May 2021), <a href="https://www.orrick.com/en/Insights/2021/05/Home-Alone-New-York-City-Enacts-Tenant-Data-Privacy-Act">Home Alone? New York City Enacts Tenant Data Privacy Act</a> ( Orrick 27 May 2021) HT @<a href="https://twitter.com/christinayiotis/status/1398331309508923394">christinayiotis</a><br />
</p><p></p><p><br />
Related posts: <a href="http://rvsoftware.blogspot.co.uk/2015/06/understanding-value-chain-of-internet.html">Understanding the Value Chain of the Internet of Things</a> (June 2015),
<a href="http://rvsoftware.blogspot.co.uk/2015/10/defeating-device-paradigm.html">Defeating the Device Paradigm</a> (Oct 2015), <a href="https://rvsoftware.blogspot.com/2019/02/hidden-functionality.html">Hidden Functionality</a> (February 2019), <a href="https://rvsoftware.blogspot.com/2019/05/towards-chatbot-ethics.html">Towards Chatbot Ethics - Whom does the chatbot serve?</a> (May 2019), <a href="https://rvsoftware.blogspot.com/2019/05/whom-does-technology-serve.html">Driverless cars - Whom does the technology serve?</a> (May 2019), <a href="https://rvsoapbox.blogspot.com/2019/06/the-road-less-travelled.html">The Road Less Travelled - Whom does the algorithm serve?</a> (June 2019)<br /></p><p> </p><p> <span style="font-size: xx-small;">Updated 16 November 2020, 29 May 2021</span> <br /></p><p></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-20024260691153569162019-10-11T12:17:00.000+01:002020-03-05T14:36:32.005+00:00Insights and Challenges from Mulesoft Connect 2019#CONNECT19 @MuleSoft is a significant player in the Integration Platform as a Service (iPaaS) market. I've just spent some time at their annual customer event in London. <br />
<br />
Over the years, I've been to many events like this. A technology company organizes an event, possibly repeated at several locations around the globe, attended by customers and prospects, business partners, employees and others. After an introduction of loud music and lights shining in your face, the CEO or CTO or honoured guest bounces onto the stage and provides a keynote speech. Outside, there will be exhibition stands with product demonstrations, as well as information about complementary products and services.<br />
<br />
At such events, we are presented with an array of messages from the company and its business partners, with endorsements and some useful insights from a handful of customers. So how to analyse and evaluate these messages? <br />
<br />
Firstly, what's new. In the case of MuleSoft, the core technology vision of microservices, networked applications and B2B ecosystems has been around for many years. (At the CBDI Forum, we were talking about some of this stuff over ten years ago.) But it's useful to see how far the industry has got towards this vision, and how much further there is to go. In his presentation, MuleSoft CTO Uri Sarid described a complex ecosystem that might exist by around 2025, including demand-side orchestration of services. There is a fair amount of technology for supply-side orchestration of APIs, but demand-side orchestration isn't really there yet.<br />
<br />
Furthermore, organizations are often cautious about releasing the APIs into the wild. For example, government departments may make APIs available to other government departments, local governments and the NHS, but in many cases it is not yet possible for citizens to consume these APIs directly, or for a third party (such as a charity) to act as a mediator. However, some sectors have made progress in this direction, thanks to initiatives such as Open Banking.<br />
<br />
However, the technology has allowed all sorts of things to be done much faster and more reliably, so I heard some good stories about the speed with which new functionality can be rolled out across multiple endpoints. As some of the technical obstacles are removed, IT people should be able to shift their attention to business transformation, or even imagination and innovation. <br />
<br />
MuleSoft argues that the API economy will drive / is driving a rapid cycle of (incremental) innovation, accelerating the pace of change in some ecosystems. MuleSoft is enthusiastic about <q>citizen integration</q> or <q>democratization</q>, shifting the initiative from the Town Planners to the Settlers and Pioneers. However, if APIs are to serve as reusable building blocks, they need to be built to last. (There is an important connection between ideas of trimodal development and ideas of pace layering, which needs to be teased out further.)<br />
<br />
<br />
Secondly, what's different. Not just differences between the past and the present, but differences between MuleSoft and other vendors with comparable offerings. At a given point in time, each competing product will provide slightly different sets of features at different price points, but feature comparisons can get outdated very quickly. And if you are acquiring this kind of technology, you would ideally like to know the total cost of ownership, and the productivity you are likely to get. It takes time and money to research such questions properly, and I'm not surprised that so many people rely on versions of the Magic Sorting Hat produced by the large analyst firms.<br />
<br />
By the way, this is not just about comparing MuleSoft with other iPaaS vendors, but comparing iPaaS with other technologies, such as Robotic Process Automation (RPA).<br />
<br />
<br />
And thirdly, what's missing. Although I heard business strategy for APIs mentioned several times, I didn't hear much about how this could be done. Several speakers warned against using the term API for a non-technical audience, and advised people to talk about service benefit.<br />
<br />
But how to identify and analyse service benefit? How do you identify the service value that can
be delivered through APIs, how do you determine the right level of
granularity and asset-specificity, and what are the design principles? In other words, how do you get the business requirements that drive the use of MuleSoft
or other iPaaS products? I buttonholed a few consultants from the large consultancies, but the answers were mostly disappointing.<br />
<br />
<br />
I plan to attend some similar events this month, and shall write a couple of general follow-up posts. <br />
<br />
<br />
<hr />
<br />
<b>Notes</b><br />
<br />
Here's an article I wrote in 2002,
which mentioned Sun Microsystems' distinction between micro services and
macro services.<br />
<br />
Richard Veryard, <a href="http://everware-cbdi.com/public/downloads/YVpAx/journal2002-02.pdf">Identifying Web Services</a> (CBDI Journal, February 2002)<br />
<br />
And here are two articles discussing demand-side orchestration.<br />
<br />
Richard Veryard and Philip Boxer, <a href="https://docs.microsoft.com/en-us/previous-versions/aa480051(v=msdn.10)">Metropolis and SOA Governance</a> (Microsoft Architecture Journal, July 2005)<br />
<br />
Philip Boxer and Richard Veryard, <a href="https://docs.microsoft.com/en-us/previous-versions/bb245658(v=msdn.10)">Taking Governance to the Edge</a> (Microsoft Architecture Journal, August 2006)<br />
<br />
See also <a href="http://everware-cbdi.com/cbdi-journal-archive">CBDI Journal Archive</a><br />
<br />
Link to MuleSoft presentations <a href="https://library.mulesoft.com/mulesoft-connect-london-2019/" target="_blank">https://library.mulesoft.com/mulesoft-connect-london-2019/</a><br />
<br />
<br />
Related post: <a href="https://rvsoapbox.blogspot.com/2019/10/strategy-and-requirements-for-api.html">Strategy and Requirements for the API Ecosystem</a> (October 2019) Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-84633120859782618262019-08-09T10:26:00.000+01:002019-10-10T11:32:57.751+01:00RPA - Real Value or Painful Experimentation?In May 2017, Fran Karamouzis of Gartner stated that "96% of clients are getting real value from RPA" (Robotic Process Automation). But by October/November 2018, RPA was declared to be at the top of the Gartner "hype cycle", also known as the <b>Peak of Inflated Expectations</b>.<br />
<br />
So from a peak of inflated expectations we should not be surprised to see RPA now entering a trough of disillusionment, with surveys showing significant levels of user dissatisfaction. Phil Fersht of HfS explains this in terms that will largely be familiar from previous technological innovations.<br />
<ul>
<li>The over-hyping of how "easy" this is
</li>
<li>Lack of real experiences being shared publicly
</li>
<li>Huge translation issues between business and IT
</li>
<li>Obsession with "numbers of bots deployed" versus quality of outcomes
</li>
<li>Failure of the "Big iron" ERP vendors and the digital juggernauts to embrace RPA </li>
</ul>
"You can't focus on a tools-first approach to anything," adds @<a href="https://www.horsesforsources.com/rpa_dissatisfaction_get_real_080119#comment-16295">jpmorgenthal</a><br />
<br />
There are some generic models and patterns of technology adoption and diffusion that are largely independent of the specific technology in question. When Everett Rogers and his colleagues did the original research on the adoption of new technology by farmers in the 1950s, it made sense to identify a spectrum of attitudes, with "innovators" and "early adopters" at one end, and with "late adopters" or "laggards" at the other end. Clearly some people can be attracted by a plausible story of future potential, while others need to see convincing evidence that an innovation has already succeeded elsewhere.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://upload.wikimedia.org/wikipedia/commons/thumb/1/11/Diffusion_of_ideas.svg/330px-Diffusion_of_ideas.svg.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="248" data-original-width="330" height="240" src="https://upload.wikimedia.org/wikipedia/commons/thumb/1/11/Diffusion_of_ideas.svg/330px-Diffusion_of_ideas.svg.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Diffusion of Innovations (Source: Wikipedia)</td></tr>
</tbody></table>
<br />
Obviously adoption by organizations is a slightly more complicated matter than adoption by individual farmers, but we can find a similar spread of attitudes within a single large organization. There may be some limited funding to carry out early trials of selected technologies (what Fersht describes as "sometimes painful experimentation"), but in the absence of positive results it gets progressively harder to justify continued funding. Opposition from elsewhere in the organization comes not only from people who are generally sceptical about technology adoption, but also from people who wish to direct the available resources towards some even newer and sexier technology. The "pioneers" have moved on to something else, and the "settlers" aren't yet ready to settle. There is a discontinuity in the adoption curve, which Geoffrey Moore calls "crossing the chasm".<br />
<br />
<i><span style="font-size: x-small;">Note: The terms "pioneers" and "settlers" refer to the trimodal approach. See my post <a href="https://rvsoapbox.blogspot.com/2016/05/beyond-bimodal.html">Beyond Bimodal</a> (May 2016).</span></i><br />
<br />
But as Fersht indicates, there are some specific challenges for RPA in particular. Although it's supposed to be about process automation, some of the use cases I've seen are simply doing localized application patching, using robots to perform ad hoc swivel-chair integration. Not even paving the cow-paths, but paving the workarounds. Tool vendors such as KOFAX recommend specific robotic types for different patching requirements. The problem with this patchwork approach to automation is that while each patch may make sense in isolation, the overall architecture progressively becomes more complicated. <br />
<br />
There is a common view of process optimization that suggests you concentrate on fixing the bottlenecks, as if the rest of the process can look after itself, and this view has been adopted by many people in the RPA world. For example, Ayshwarya Venkataraman, who describes herself on LinkedIn as a technology evangelist, asserts that "process optimization can be easily achieved by automating some tasks in a process". <br />
<br />
But fixing a bottleneck in one place often exposes a bottleneck somewhere else. Moreover, complicated workflow solutions may be subject to Braess's paradox, which says that under certain circumstances adding capacity to a network can actually slow it down. So you really need to understand the whole end-to-end process (or system-of-systems). <br />
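Braess's paradox can be illustrated numerically with the textbook example: 4000 drivers travel between two points, choosing between two symmetric routes, each combining a fixed-cost link (45 minutes) with a congestion-dependent link (n/100 minutes, where n is the number of drivers on the link).

```python
# Numeric illustration of Braess's paradox (textbook example).
# 4000 drivers travel from Start to End. Each of two symmetric routes
# has one fixed link (45 min) and one congestion-dependent link
# (n/100 min, where n is the number of drivers on that link).

DRIVERS = 4000

def time_without_shortcut():
    # At equilibrium the drivers split evenly between the two routes.
    n = DRIVERS / 2
    return n / 100 + 45        # 20 + 45 = 65 minutes per driver

def time_with_shortcut():
    # A zero-cost shortcut between the route midpoints makes the two
    # congestion-dependent links dominant for every individual driver,
    # so all 4000 drivers end up on both of them.
    return DRIVERS / 100 + 0 + DRIVERS / 100   # 40 + 0 + 40 = 80 minutes

print(time_without_shortcut())  # 65.0
print(time_with_shortcut())     # 80.0 - adding capacity made everyone slower
```

Adding the "improvement" (the shortcut) raises everyone's journey from 65 to 80 minutes, which is why local fixes to a workflow need to be checked against the end-to-end system.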
<br />
And there's an ethical point here as well. Human-computer processes need to be designed not only for efficiency and reliability but also for job satisfaction. The robots should be configured to serve the people, not just taking over the easily-automated tasks and leaving the human with a fragmented and incoherent job serving the robots.<br />
<br />
And as the number of bots grows (and the number of bot licences you've bought), the challenge shifts from getting each bot to work properly to combining large numbers of bots in a meaningful and coordinated way. Adding a single robotic patch to an existing process may deliver short-term benefits, but how are users supposed to mobilize and combine hundreds of bots in a coherent and flexible manner, to deliver real lasting enterprise-scale value? Ravi Ramamurthy believes that a rich ecosystem of interoperable robots will enable a proliferation of automation - but we aren't quite there yet.<br />
<br />
<hr />
<br />
Phil Fersht, <a href="https://www.horsesforsources.com/gartner-rpa-overhype_052317">Gartner: 96% of customers are getting real value from RPA? Really?</a> (HfS 23 May 2017), <a href="https://www.horsesforsources.com/rpa_dissatisfaction_get_real_080119">With 44% dissatisfaction, it's time to get real about the struggles of RPA 1.0</a> (HfS, 31 July 2019)<br />
<br />
Geoffrey Moore, Crossing the Chasm (1991) <br />
<br />
Susan Moore, <a href="https://www.gartner.com/en/newsroom/press-releases/2019-06-24-gartner-says-worldwide-robotic-process-automation-sof">Gartner Says Worldwide Robotic Process Automation Software Market Grew 63% in 2018</a> (Gartner, 24 June 2019)<br />
<br />
Ravi Ramamurthy, <a href="https://www.linkedin.com/pulse/robotic-automation-just-patchwork-ravi-ramamurthy/">Is Robotic Automation just a patchwork?</a> (6 December 2015)<br />
<br />
Everett Rogers, Diffusion of Innovations (First published 1962, 5th edition 2003)<br />
<br />
Daniel Schmidt, <a href="https://www.kofax.com/Blog/2018/april/robotic-process-automation-4-indispensable-types-of-robots-and-how-to-use-them">4 Indispensable Types of Robots (and How to Use Them)</a> (KOFAX Blog, 10 April 2018)<br />
<br />
Alex Seran, <a href="https://huronconsultinggroup.com/resources/enterprise-solutions/value-of-robotic-process-automation">More than Hype: Real Value of Robotic Process Automation (RPA)</a> (Huron, October 2018)<br />
<br />
Sony Shetty, <a href="https://www.gartner.com/en/newsroom/press-releases/2018-11-13-gartner-says-worldwide-spending-on-robotic-process-automation-software-to-reach-680-million-in-2018">Gartner Says Worldwide Spending on Robotic Process Automation Software to Reach $680 Million in 2018</a> (Gartner, 13 November 2018)<br />
<br />
Ayshwarya Venkataraman, <a href="https://blog.aspiresys.com/digital/big-data-analytics/rpa-renounces-swivel-chair-automation-digital-workforce/">How Robotic Process Automation Renounces Swivel Chair Automation with a Digital Workforce
</a>(Aspire Systems, 5 June 2018)<br />
<br />
<br />
Wikipedia: <a href="https://en.wikipedia.org/wiki/Braess%27s_paradox">Braess's Paradox</a>, <a href="https://en.wikipedia.org/wiki/Diffusion_of_innovations">Diffusion of Innovations</a>, <a href="https://en.wikipedia.org/wiki/Technology_adoption_life_cycle">Technology Adoption Lifecycle</a><br />
<br />
<br />
Related posts: <a href="https://rvsoapbox.blogspot.com/2019/08/process-automation-and-intelligence.html">Process Automation and Intelligence</a> (August 2019), <a href="https://demandingchange.blogspot.com/2019/08/automation-ethics.html">Automation Ethics</a> (August 2019) Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-82242950567994836962019-07-18T09:59:00.000+01:002019-07-31T12:56:51.032+01:00Robust Against ManipulationAs algorithms get more sophisticated, so do the opportunities to trick them. An algorithm can be forced or nudged to make incorrect decisions, in order to yield benefits to a (hostile) third party. John Bates, one of the pioneers of Complex Event Processing, has raised fears of <b>algorithmic terrorism</b>, but algorithmic manipulation may also be motivated by commercial interests or simple vandalism.<br />
<br />
An extreme example of this could be a road sign that reads STOP to
humans but is misread as something else by self-driving cars. Another
example might be false signals that are designed to trigger algorithmic
trading and thereby nudge markets. Given the increasing reliance on
automatic screening machines at airports and elsewhere, there are
obvious incentives for smugglers and terrorists to develop ways of
fooling these machines - either to get their stuff past the machines, or
to generate so many false positives that the machines aren't taken
seriously. And of course email spammers are always looking for ways to bypass the spam filters.<br />
<br />
<blockquote class="tr_bq">
"It will also become increasingly important that AI algorithms be <b><i>robust against manipulation</i></b>. A machine
vision system to scan airline luggage for bombs must be robust against human adversaries deliberately
searching for exploitable flaws in the algorithm - for example, a shape that, placed next to a pistol
in one's luggage, would neutralize recognition of it. Robustness against manipulation is an ordinary
criterion in information security; nearly <i><b>the </b></i>criterion. But it is not a criterion that appears often in
machine learning journals, which are currently more interested in, e.g., how an algorithm scales upon
larger parallel systems." [Bostrom and Yudkowsky]</blockquote>
<br />
One kind of manipulation involves the construction of misleading inputs (known in the literature as "<b>adversarial examples</b>"): for instance, an input that exploits the inaccuracies of a specific image recognition algorithm to produce an image that will be incorrectly classified, thus triggering an incorrect action (or suppressing the correct one).<br />
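To make the idea concrete, here is a toy adversarial perturbation against a hand-built logistic classifier, in the spirit of the fast gradient sign method. The weights, input and perturbation size are all invented for illustration; a real attack would target a trained model.

```python
import numpy as np

# Toy adversarial perturbation against a hand-built logistic classifier,
# in the spirit of the fast gradient sign method. The weights, input
# and epsilon are invented for illustration.

w = np.array([1.0, -2.0, 0.5])   # classifier weights
b = 0.1                          # bias

def predict(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.6, 0.1, 0.4])    # correctly classified as class 1

# For logistic loss with true label 1, the loss gradient with respect
# to the input is a negative multiple of w, so stepping the input by
# -eps * sign(w) increases the loss and pushes the score below 0.5.
eps = 0.25
x_adv = x - eps * np.sign(w)

print(predict(x))      # > 0.5: classified as class 1
print(predict(x_adv))  # < 0.5: a small perturbation flips the decision
```

The perturbation is small in each coordinate, yet the classification flips - which is exactly the property that makes adversarial examples hard to spot.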
<br />
Another kind of manipulation involves <b>poisoning the model</b> - deliberately feeding a machine learning algorithm with biased or bad data, in order to disrupt or skew its behaviour. (Historical analogy: manipulation of pop music charts.)<br />
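A toy sketch of how a few poisoned labels can shift a model, using an invented one-dimensional "spamminess" score and a simple mean-threshold classifier (all data hypothetical):

```python
# Toy illustration of model poisoning by label flipping. A spam filter
# scores messages on a single invented "spamminess" feature and
# thresholds at the midpoint of the two class means. A few extreme
# points mislabelled "ham" drag the threshold upwards, letting
# borderline spam through. All data is hypothetical.

def fit_threshold(samples):
    """Midpoint of the mean scores of the two labelled classes."""
    spam = [score for score, label in samples if label == "spam"]
    ham = [score for score, label in samples if label == "ham"]
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

clean = [(9.0, "spam"), (8.0, "spam"), (1.0, "ham"), (2.0, "ham")]
poisoned = clean + [(10.0, "ham"), (11.0, "ham")]  # attacker-injected labels

print(fit_threshold(clean))     # 5.0
print(fit_threshold(poisoned))  # 7.25 - spam scoring below 7.25 now passes
```

Two bad training points out of six are enough to move the decision boundary substantially; at scale, poisoning can be far more subtle than this.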
<br />
We have to assume that some bad actors will have access to the latest technologies, and will themselves be using machine learning and other techniques to design these attacks, and this sets up an arms race between the good guys and the bad guys. Is there any way to keep advanced technologies from getting into the wrong hands?<br />
<br />
In the security world, people are familiar with the concept of Distributed Denial of Service (DDoS). But perhaps this now becomes Distributed Distortion of Service, which may be more subtle but no less dangerous.<br />
<br />
While there are strong arguments for algorithmic transparency of automated systems, some people may be concerned that transparency will aid such attacks. The argument here is that the more adversaries can discover about the algorithm and its training data, the more opportunities for manipulation. But it would be wrong to conclude that we should keep algorithms safe by keeping them secret ("<b>security through obscurity</b>"). A better conclusion would be that transparency should be a defence against manipulation, by making it easier for stakeholders to detect and counter such attempts. <br />
<br />
<br />
<hr />
<br />
John Bates, <a href="http://apama.typepad.com/my_weblog/2010/08/algorithmic-terrorism.html">Algorithmic Terrorism</a> (Apama, 4 August 2010). <a href="http://www.huffingtonpost.com/john-bates/to-catch-an-algo-thief_b_6759286.html">To Catch an Algo Thief</a> (Huffington Post, 26 Feb 2015) <br />
<br />
Nick Bostrom and Eliezer Yudkowsky, The Ethics of Artificial Intelligence (2011) <br />
<br />
Ian Goodfellow, Patrick McDaniel and Nicolas Papernot, <a href="https://cacm.acm.org/magazines/2018/7/229030-making-machine-learning-robust-against-adversarial-inputs/fulltext">Making Machine Learning Robust Against Adversarial Inputs</a>
(Communications of the ACM, Vol. 61 No. 7, July 2018) Pages 56-66. See also <a href="https://vimeo.com/272647433">video interview with Papernot</a>.<br />
<br />
Neil Strauss, <a href="https://www.nytimes.com/1996/01/25/arts/are-pop-charts-manipulated.html">Are Pop Charts Manipulated?</a> (New York Times, 25 January 1996)
<br />
<br />
Wikipedia: <a href="https://en.wikipedia.org/wiki/Security_through_obscurity">Security Through Obscurity</a><br />
<br />
Related posts: <a href="https://rvsoapbox.blogspot.com/2017/01/the-unexpected-happens.html">The Unexpected Happens</a> (January 2017) Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-84066785358191356562019-07-16T11:15:00.001+01:002023-04-11T07:42:02.537+01:00Nudge TechnologyPeople are becoming aware of the ways in which AI and big data can be used to influence people, in accordance with <a href="https://en.wikipedia.org/wiki/Nudge_theory"><b>Nudge Theory</b></a>. Individuals can be nudged to behave in particular ways, and large-scale social systems (including elections and markets) can apparently be manipulated. In other posts, I have talked about the general ethics of nudging systems. This post will concentrate on the technological aspects.<br />
<br />
Technologically mediated nudges are delivered by a sociotechnical system we could call a <b>Nudge System</b>. This system might contain several different algorithms and other components, and may even have a human-in-the-loop. Our primary concern here is about the system as a whole.<br />
<br />
As an example, I am going to consider a digital advertisement in a public place, which shows anti-smoking messages whenever it detects tobacco smoke.<br />
<br />
Typically, a nudge system would perform several related activities.<br />
<br />
1. There would be some mechanism for "reading" the situation: detecting the events that might trigger a nudge, and determining the context. This might be a simple sense-and-respond mechanism, or it might involve more sophisticated analysis using some kind of model. There is typically an element of surveillance here. In our example, let us imagine that the system can distinguish different brands of cigarette, and determine how many people are smoking in its vicinity.<br />
<br />
2. Assuming that there was some variety in the nudges produced by the system, there would be a mechanism for selecting or constructing a specific nudge, using a set of predefined nudges or nudge templates. For example, different anti-smoking messages for the expensive brands versus the cheap brands. Combined with other personal data, the system might even be able to name and shame the smokers. <br />
<br />
3. There would then be a mechanism for delivering or otherwise executing the nudge. For example, delivery might be private (to a person's phone) or public (via a display board). We might call this the <b>nudge agent</b>. In some cases, the nudge may be delivered by a human, but prompted by an intelligent system. If the nudge is publicly visible, this could allow other people to infer the circumstances leading to the nudge - and is therefore a potential breach of privacy. (For example, letting your friends and family know that you were having a sneaky cigarette, when you had told them that you had given up.)<br />
<br />
4. In some cases, there might be a direct feedback loop, giving the system immediate data about the human response to the nudge. Obviously this will not always be possible. Nevertheless, we would expect the system to retain a record of the delivered nudges, for future analysis. To support multiple feedback tempos (as discussed in my work on Organizational Intelligence) there could be
multiple components performing the feedback and learning function.
Typically, the faster loops would be completely automated (autonomic)
while the slower loops would have some human interaction.<br />
<br />
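The four activities above can be sketched in code. What follows is a minimal illustrative sketch in Python, not any vendor's actual design: the class, method names and messages are all invented for the smoke-detecting advertisement example.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NudgeSystem:
    templates: dict                           # 2. predefined nudge templates
    log: list = field(default_factory=list)   # 4. record kept for later analysis

    def read_situation(self, event: dict) -> Optional[str]:
        # 1. "Reading" the situation: does this event warrant a nudge at all?
        if event.get("smoke_detected"):
            return event.get("brand", "generic")
        return None

    def select_nudge(self, trigger: str) -> str:
        # 2. Select a message appropriate to the detected context
        return self.templates.get(trigger, self.templates["generic"])

    def deliver(self, event: dict) -> Optional[str]:
        # 3. Deliver the nudge (here: publicly, via the display board)
        trigger = self.read_situation(event)
        if trigger is None:
            return None
        message = self.select_nudge(trigger)
        # 4. Retain a record of delivered nudges for the feedback loop
        self.log.append({"trigger": trigger, "message": message})
        return message

board = NudgeSystem(templates={
    "premium": "That habit is costing you more than money.",
    "generic": "This is a smoke-free area.",
})
board.deliver({"smoke_detected": True, "brand": "premium"})
```

Even in this toy version, the ethically sensitive choices - what counts as a trigger, which templates exist, what gets logged and for how long - sit in the configuration as much as in the algorithm, which is the point developed below.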
There would typically be algorithms to support each of these activities, possibly based on some form of Machine Learning, and there is the potential for algorithmic bias at several points in the design of the system, as well as various forms of inaccuracy (for example false positives, where the system incorrectly detects tobacco smoke). More information doesn't always mean better information - for example, someone might design a sensor that would estimate the height of the smoker, in order to detect underage smokers - but this obviously introduces new possibilities of error.<br />
<br />
In many cases, there will be a separation between the technology engineers who build systems and components, and the social engineers who use these systems and components to produce some commercial or social effects. This raises two different ethical questions.<br />
<br />
Firstly, what does <b>responsible use of nudge technology</b> look like - in other words, what are the acceptable ways in which nudge technology can be deployed? What purposes, what kinds of content, what testing and continuous monitoring to detect any signals of harm or bias, and so on. Should the nudge be private to the nudgee, or could it be visible to others? What technical and organizational controls should be in place before the nudge technology is switched on?<br />
<br />
And secondly, what does <b>responsible nudge technology</b> look like - in other words, technology that can be used safely and reliably, with reasonable levels of transparency and user control?<br />
<br />
We may note that nudge technologies can be exploited by third parties with a commercial or political intent. For example, there are constant attempts to trick or subvert the search and recommendation algorithms used by the large platforms, and Alex Hern recently reported on Google's ongoing battle to combat misinformation and promotion of extreme content. So one of the requirements of responsible nudge technology is being <b><a href="https://rvsoftware.blogspot.com/2019/07/robust-against-manipulation.html">Robust Against Manipulation</a></b>.<br />
<br />
We may also note that if there is any bias anywhere, this may either be inherent in the design of the nudge technology itself, or may be introduced by the users of the nudge technology when customizing it for a specific purpose. For example, nudges may be deliberately worded as "dog whistles" - designed to have a strong effect on some subjects while being ignored by others - and this can produce significant and possibly unethical bias in the working of the system. But this bias is not in the algorithm but in the nudge templates, and there may be other ways in which bias is relevant to nudging in general, so the question of algorithmic bias is not the whole story. <br />
<br />
<hr />
<br />
Alex Hern, <a href="https://www.theguardian.com/technology/2019/jul/02/google-tweaked-algorithm-after-rise-in-us-shootings">Google tweaked algorithm after rise in US shootings</a> (Guardian, 2 July 2019)<br />
<br />
Wikipedia: <a href="https://en.wikipedia.org/wiki/Nudge_theory">Nudge Theory</a><br />
<br />
Related posts: <a href="https://demandingchange.blogspot.com/2010/10/orgintelligence-in-control-room.html">Organizational Intelligence in the Control Room</a> (October 2010), <a href="https://demandingchange.blogspot.com/2019/05/on-ethics-of-technologically-mediated.html">On the Ethics of Technologically Mediated Nudge</a> (May 2019), <a href="https://demandingchange.blogspot.com/2019/05/the-nudge-as-speech-act.html">The Nudge as a Speech Act</a> (May 2019), <a href="https://demandingchange.blogspot.com/2019/07/algorithms-and-governmentality.html">Algorithms and Governmentality</a> (July 2019), <a href="https://rvsoftware.blogspot.com/2019/07/robust-against-manipulation.html">Robust Against Manipulation</a> (July 2019)<br />
<br />
<br />Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-38037225808204063812019-06-26T17:29:00.002+01:002022-04-10T13:20:06.656+01:00Listening for TroubleMany US schools and hospitals have installed Aggression Detection microphones that claim to detect sounds of aggression, thus allowing staff or security personnel to intervene to prevent violence. Sound Intelligence, the company selling the system, claims that the detector has helped to reduce aggressive incidents. What are the ethical implications of such systems?<br />
<br />
ProPublica recently tested one such system, enrolling some students to produce a range of sounds that might or might not trigger the alarm. They also talked to some of the organizations using it, including a hospital in New Jersey that has now decommissioned the
system, following a trial that (among other things) failed to detect a
seriously agitated patient. ProPublica's conclusion was that the system was "less than reliable".<br />
<br />
Sound Intelligence is a Dutch company, which has been fitting microphones into street cameras for over ten years, in the Netherlands and elsewhere in Europe. This was approved by the Dutch Data Protection Regulator on the argument that the cameras are only switched on after someone screams, so the privacy risk is reduced.<br />
<br />
But Dutch cities can be pretty quiet. As one of the developers admitted to the New Yorker in 2008, "We don’t have enough aggression to train the system properly". Many experts have questioned the validity of installing the system in an entirely different environment, and Sound Intelligence refused to reveal the source of the training data, including whether the data had been collected in schools.<br />
<br />
In theory, a genuine scream can be identified by a sound pattern that indicates a partial loss of control of the vocal cords, although the accurate detection of this difference can be compromised by audio distortion (known as clipping). When people scream on demand, they protect their vocal cords and do not produce the same sound. (Actors are taught to simulate screams, but the technology can supposedly tell the difference.) So it probably matters whether the system is trained and tested using real screams or fake ones. (Of course, one might have difficulty persuading an ethics committee to approve the systematic production and collection of real screams.)<br />
<br />
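Clipping itself is easy to illustrate. The sketch below is a hypothetical detector, assuming 16-bit audio with a full-scale value of 32767: it simply flags a recording when more than a tiny fraction of its samples sit hard against the converter's limit. (This is one crude way of screening out distorted samples; ProPublica's actual procedure is not shown here.)

```python
import math

def is_clipped(samples, full_scale=32767, tolerance=0.001):
    # Count samples pinned at (or beyond) full scale; a clipped waveform
    # loses exactly the fine spectral detail needed to distinguish a
    # genuine scream from a performed one.
    at_limit = sum(1 for s in samples if abs(s) >= full_scale)
    return at_limit / len(samples) > tolerance

# A quiet tone stays within range; an overdriven one slams into the limit.
clean = [int(20000 * math.sin(i / 10)) for i in range(1000)]
overdriven = [max(-32767, min(32767, int(90000 * math.sin(i / 10))))
              for i in range(1000)]
```

Here `is_clipped(clean)` is false while `is_clipped(overdriven)` is true - the overdriven signal has had its peaks flattened, and with them the information the classifier depends on.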
Can any harm be caused by such technologies? Apart from the fact that schools may be wasting money on stuff that doesn't actually work, there is a fairly diffuse harm of unnecessary surveillance. Students may learn to suppress all varieties of loud noises, including sounds of celebration and joy. There may also be opportunities for the technologies to be used as a tool for harming someone - for example, by playing a doctored version of a student's voice in order to get that student into trouble. Or, if the security guard is a bit trigger-happy, killed.<br />
<br />
Technologies like this can often be gamed. For example, a student or ex-student planning an act of violence would be aware of the system and would have had ample opportunity to test what sounds it did or didn't respond to.
<br />
<br />
Obviously no technology is completely risk-free. If a technology provides genuine benefits in terms of protecting people from real threats, then this may outweigh any negative side-effects. But if the benefits are unproven or imaginary, as ProPublica suggests, this is a more difficult equation.<br />
<br />
ProPublica quoted a school principal from a quiet leafy suburb, who justified the system as providing "a bit of extra peace of mind". This could be interpreted as a desire to reassure parents with a <a href="https://posiwid.blogspot.com/2019/06/false-sense-of-security.html">false sense of security</a>. Which might be justifiable if it allowed children and teachers to concentrate on schoolwork rather than worrying unnecessarily about unlikely scenarios, or pushing for more extreme measures such as arming the teachers. (But there is always an ethical question mark over security theatre of this kind.)<br />
<br />
But let's go back to the nightmare scenario that the system is supposed to protect against. If a school or hospital equipped with this system were to experience a mass shooting incident, and the system failed to detect the incident quickly
enough (which on the ProPublica evidence seems quite likely), the
incident investigators might want to look at sound recordings from the
system. Fortunately, these microphones "allow administrators to record,
replay and store those snippets of conversation indefinitely". So that's
alright then.<br />
<br />
In addition to publishing its findings, ProPublica also published the methodology used for testing and analysis. The first point to note is that this was done with the active collaboration of the supplier. It seems ProPublica were provided with good technical information, including the internal architecture of the device and the exact specification of the microphone used. They were able to obtain an exactly equivalent microphone, and could rewire the device and intercept the signals. They discarded samples that had been subject to clipping.<br />
<br />
The effectiveness of any independent testing and evaluation is clearly affected by the degree of transparency of the solution, and the degree of cooperation and support provided by the supplier and the users. So this case study has implications, not only for the testing of devices, but also for transparency and system access.<br />
<br />
<br />
<hr />
<br />
Jack Gillum and Jeff Kao, <a href="https://features.propublica.org/aggression-detector/the-unproven-invasive-surveillance-technology-schools-are-using-to-monitor-students/">Aggression Detectors: The Unproven, Invasive Surveillance Technology Schools Are Using to Monitor Students</a> (ProPublica, 25 June 2019)
<br />
<br />
Jeff Kao and Jack Gillum, <a href="https://projects.propublica.org/graphics/aggression-detector-data-analysis">Methodology: How We Tested an Aggression Detection Algorithm</a> (ProPublica, 25 June 2019)
<br />
<br />
John Seabrook, <a href="https://www.newyorker.com/magazine/2008/06/23/hello-hal">Hello, Hal</a> (New Yorker, 16 June 2008)<br />
<br />
P.W.J. van Hengel and T.C. Andringa, <a href="https://www.researchgate.net/publication/4308142_Verbal_aggression_detection_in_complex_social_environments">Verbal aggression detection in complex social environments</a> (IEEE Conference on Advanced Video and Signal Based Surveillance, 2007)<br />
<br />
<a href="http://www.statewatch.org/subscriber/protected/sw16n56.pdf">Groningen makes “listening cameras" permanent</a> (Statewatch, Vol 16 no 5/6, August-December 2006)<br />
<br />
Wikipedia: <a href="https://en.wikipedia.org/wiki/Clipping_(audio)">Clipping (Audio)</a><br />
<br />
Related posts: <a href="https://rvsoftware.blogspot.com/2019/03/affective-computing.html">Affective Computing</a> (March 2019), <a href="https://posiwid.blogspot.com/2019/06/false-sense-of-security.html">False Sense of Security</a> (June 2019)<br />
<br />
<br />
<span style="font-size: xx-small;">Updated 28 June 2019. Thanks to Peter Sandman for pointing out a lack of clarity in the previous version.</span>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-64013024004565293902019-05-11T12:21:00.003+01:002023-04-08T13:48:17.898+01:00Whom does the technology serve?When regular hyperbole isn't sufficient, writers often refer to new technologies as <b>The Holy Grail</b> of something or other. As I pointed out in my post on <a href="https://rvsoftware.blogspot.com/2019/05/towards-chatbot-ethics.html">Chatbot Ethics</a>, this has some important ethical implications. <br />
<br />
Because in the mediaeval Parsifal legend, at a key moment in the story, our hero fails to ask the critical question: <i><b>Whom Does the Grail Serve?</b></i> And when technologists enthuse about the latest inventions, they typically overlook the same question: <i><b>Whom Does the Technology Serve?</b></i><br />
<br />
In a new article on driverless cars, Dr Ashley Nunes of MIT argues that academics have allowed themselves to be distracted by versions of the Trolley Problem (<i><b>Whom Shall the Vehicle Kill?</b></i>), and have neglected some much more important ethical questions.<br />
<br />
For one thing, Nunes argues that so-called autonomous vehicles are never going to be fully autonomous. There will always be ways of controlling cars remotely, so the idea of a lone robot struggling with some ethical dilemma is just philosophical science fiction. Last year, he told Jesse Dunietz that he had yet to find a safety-critical transport system without real-time human oversight.<br />
<br />
And in any case, road safety is never about one car at a time; it is about <b>deconfliction</b> - cars avoiding each other as well as pedestrians. With human driving, there are multiple deconfliction mechanisms that allow many vehicles to occupy the same space without hitting each other. These include traffic signals, road markings and other conventions indicating right of way, as well as signals (including honking and flashing lights) used to negotiate between drivers, or by drivers to show that they are willing to wait for a pedestrian to cross the road in front of them. Equivalent mechanisms will be required to enable so-called autonomous vehicles to provide a degree of transparency of intention, and therefore trust. (See Matthews et al; see also Applin and Fischer.) See my post on the <a href="https://rvsoapbox.blogspot.com/2019/05/the-ethics-of-interoperability.html">Ethics of Interoperability</a>.<br />
<br />
<br />
But according to Nunes, "the most important
question that we should be asking about this technology" is "Who stands to
gain from its life-saving potential?" Because "if those who most need it don’t have access, whose lives would we actually be saving?" <br />
<br />
In other words, <i><b>Whom Does The Grail Serve?</b></i><br />
<br />
<br />
<hr /><p>
<br />
Sally Applin and Michael Fischer, Applied Agency: Resolving Multiplexed Communication in Automobiles (<a href="https://www.auto-ui.org/12/docs/AutomotiveUI-2012-Adjunct-Proceedings.pdf">Adjunct Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '12), October 17–19, 2012, Portsmouth, NH, USA</a>) HT @<a href="https://twitter.com/AnthroPunk/status/1127447467799109632">AnthroPunk</a><br />
<a href="https://twitter.com/AnthroPunk/status/1127447467799109632"></a><br />
Rachel Coldicutt, <a href="https://medium.com/doteveryone/tech-ethics-who-are-they-good-for-4afedcf105c3">Tech ethics, who are they good for?</a> (8 June 2018)<br />
<br />
Jesse Dunietz, <a href="https://undark.org/2018/11/22/opinion-self-driving-trains/">Despite Advances in Self-Driving Technology, Full Automation Remains Elusive</a> (Undark, 22 November 2018) HT @<a href="https://twitter.com/SafeSelfDrive/status/1126532344762945536">SafeSelfDrive</a><br />
<br />
Ashley Nunes, <a href="https://www.nature.com/articles/d41586-019-01473-3">Driverless cars: researchers have made a wrong turn</a> (Nature Briefing, 8 May 2019) HT @<a href="https://twitter.com/vdignum/status/1127134470291775489">vdignum</a> @<a href="https://twitter.com/HumanDriving/status/1126531129983549440">HumanDriving</a><br />
<a href="https://twitter.com/vdignum/status/1127134470291775489"><br /></a>
Milecia Matthews, Girish Chowdhary and Emily Kieson, <a href="https://arxiv.org/abs/1708.07123">Intent Communication between Autonomous Vehicles and Pedestrians</a> (2017) </p><p>Eric A Taub, <a href="https://www.nytimes.com/2019/08/01/business/self-driving-cars-jaywalking.html">How Jaywalking Could Jam Up the Era of Self-Driving Cars</a> (New York Times, 1 August 2019)<br />
<a href="https://twitter.com/vdignum/status/1127134470291775489"><br /></a>
Wikipedia: <a href="https://en.wikipedia.org/wiki/Trolley_problem">Trolley Problem</a> <br />
<br />
<br />
Related posts<br />
<br />
<a href="https://rvsoapbox.blogspot.com/2006/11/for-whom.htm">For Whom</a> (November 2006), <a href="https://rvsoftware.blogspot.com/2015/10/defeating-device-paradigm.html">Defeating the Device Paradigm</a> (October 2015), <a href="https://rvsoftware.blogspot.com/2019/05/towards-chatbot-ethics.html">Towards Chatbot Ethics - Whom Does the Chatbot Serve?</a> (May 2019), <a href="https://rvsoapbox.blogspot.com/2019/05/the-ethics-of-interoperability.html">Ethics of Interoperability</a> (May 2019), <a href="https://rvsoapbox.blogspot.com/2019/06/the-road-less-travelled.html">The Road Less Travelled - Whom Does the Algorithm Serve?</a> (June 2019), <a href="https://demandingchange.blogspot.com/2019/11/jaywalking.html">Jaywalking</a> (November 2019)<br />
<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-85042612052054852212019-05-05T14:49:00.003+01:002023-04-08T13:51:40.613+01:00Towards Chatbot EthicsWhen over-enthusiastic articles describe chatbotics as the Holy Grail (for digital marketing or online retail or whatever), I should normally ignore this as the usual hyperbole. But in this case, I'm going to take it literally. Let me explain.<br />
<br />
As followers of the Parsifal legend will know, at a critical point in the story Parsifal fails to ask the one question that matters: <i><b>"Whom does the Grail serve?"</b></i><br />
<br />
And anyone who wishes to hype chatbots as some kind of "holy grail" must also ask the same question: <i><b>"Whom does the Chatbot serve?"</b> </i>IBM puts this at the top of its list of ethical questions for chatbots, as does @ashevat (formerly with Slack).<br />
<br />
To the extent that a chatbot is providing information and advice, it is subject to many of the same ethical considerations as any other information source - is the information complete, truthful and unbiased, or does it serve the information provider's commercial interest? Perhaps the chatbot (or rather its owner) is getting a commission if you eat at the recommended restaurant, just as hotel concierges have always done. A restaurant review in an online or traditional newspaper may appear to be independent, but restaurants have many ways of rewarding favourable reviews even without cash changing hands. You might think it is ethical for this to be transparent.<br />
<br />
But an important difference between a chatbot and a newspaper article is that the chatbot has a greater ability to respond to the particular concerns and vulnerabilities of the user. Shiva Bhaskar discusses how this power can be used for manipulation and even intimidation. And making sure the user knows that they are talking to a bot rather than a human does not guard against an emotional reaction: Joseph Weizenbaum was one of the first in the modern era to recognize this.<br />
<br />
One area where particularly careful ethical scrutiny is required is the use of chatbots for mental health support. Obviously there are concerns about efficacy and safety as well as privacy, and such systems need to undergo clinical trials for efficacy and potential adverse outcomes, just like any other medical intervention. Kira Kretzschmar et al argue that it is also essential that these platforms are specifically programmed to discourage over-reliance, and that users are encouraged to seek human support in the case of an emergency.<br />
<br />
<br />
Another ethical problem with chatbots is related to the Weasley doctrine (named after Arthur Weasley in <i>Harry Potter and the Chamber of Secrets</i>): <br />
<blockquote class="tr_bq">
"Never trust anything that can think for itself if you can't see where it keeps its brain." </blockquote>
Many people have installed these curious cylindrical devices in their homes, but is that where the intelligence is actually located? When a private conversation was accidentally transmitted from Portland to Seattle, engineers at Amazon were able to inspect the logs, coming up with a somewhat implausible explanation as to how this might have occurred. Obviously this implies a lack of boundaries between the device and the manufacturer. And as @geoffreyfowler reports, chatbots don't only send recordings of your voice back to Master Control, they also send status reports from all your other connected devices.<br />
<br />
Smart home, huh? Smart for whom? Transparency for whom? Or to put it another way, whom does the chatbot serve?<br />
<br />
<br />
<hr />
<br />
<br />
Shiva Bhaskar, <a href="https://medium.com/@shivagbhaskar/the-chatbots-that-will-manipulate-us-2e8b2b3ba804">The Chatbots That Will Manipulate Us</a> (30 June 2017)<br />
<br />
Geoffrey A. Fowler, <a href="https://www.washingtonpost.com/technology/2019/05/06/alexa-has-been-eavesdropping-you-this-whole-time/?utm_term=.48e4a0621e53">Alexa has been eavesdropping on you this whole time</a> (Washington Post, 6 May 2019) HT@<a href="https://twitter.com/hypervisible/status/1125398133238841345">hypervisible</a><br />
<br />
Sidney Fussell, <a href="https://www.theatlantic.com/technology/archive/2019/04/amazon-workers-eavesdrop-amazon-echo-clips/587110/">Behind Every Robot Is a Human</a> (The Atlantic, 15 April 2019)<br />
<br />
Tim Harford, <a href="https://www.bbc.co.uk/news/business-49344596">Can a computer fool you into thinking it is human?</a> (BBC, 25 September 2019)<br />
<a href="https://twitter.com/hypervisible/status/1125398133238841345"><br /></a>
Gary Horcher, <a href="https://www.kiro7.com/news/local/woman-says-her-amazon-device-recorded-private-conversation-sent-it-out-to-random-contact/755507974">Woman says her Amazon device recorded private conversation, sent it out to random contact</a> (25 May 2018)<br />
<br />
Kira Kretzschmar et al, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6402067/">Can Your Phone Be Your Therapist? Young People’s Ethical Perspectives on the Use of Fully Automated Conversational Agents (Chatbots) in Mental Health Support</a> (Biomed Inform Insights, 11, 5 March 2019)
<br />
<br />
Trips Reddy, <a href="https://www.ibm.com/blogs/watson/2017/10/the-code-of-ethics-for-ai-and-chatbots-that-every-brand-should-follow/">The code of ethics for AI and chatbots that every brand should follow</a> (IBM, 15 October 2017)
<br />
<br />
Amir Shevat, <a href="https://medium.com/slack-developer-blog/hard-questions-about-bot-ethics-4f80797e34f0">Hard questions about bot ethics</a> (Slack Platform Blog, 12 October 2016)<br />
<br />
Tom Warren, <a href="https://www.theverge.com/2018/5/24/17391898/amazon-alexa-private-conversation-recording-explanation">Amazon explains how Alexa recorded a private conversation and sent it to another user</a> (The Verge, 24 May 2018)<br />
<br />
Joseph Weizenbaum, Computer Power and Human Reason (WH Freeman, 1976)<br />
<br />
<br />
Related posts: <a href="https://rvsoftware.blogspot.com/2015/06/understanding-value-chain-of-internet.html">Understanding the Value Chain of the Internet of Things</a> (June 2015), <a href="https://rvsoftware.blogspot.com/2019/05/whom-does-technology-serve.html">Whom does the technology serve?</a> (May 2019), <a href="https://rvsoapbox.blogspot.com/2019/06/the-road-less-travelled.html">The Road Less Travelled</a> (June 2019), <a href="https://rvsoftware.blogspot.com/2019/12/the-allure-of-smart-home.html">The Allure of the Smart Home</a> (December 2019), <a href="https://rvsoftware.blogspot.com/2021/12/the-sad-reality-of-chatbotics.html">The Sad Reality of Chatbotics</a> (December 2021)<br />
<br />
<span style="font-size: xx-small;">updated 4 October 2019</span> Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-23649508983956515122019-03-07T20:20:00.003+00:002022-03-10T20:21:53.580+00:00Affective ComputingAt #NYTnewwork in February 2019, @<a href="https://twitter.com/kaliouby/status/1103406460380286976">Rana el-Kaliouby</a> asked <q>What if doctors could objectively measure your mental state?</q> Dr el-Kaliouby is one of the pioneers of affective computing,
and is founder of a company called Affectiva. Some of her early work was
building apps that helped autistic people to read expressions. She now argues that <q>artificial emotional intelligence is key to building reciprocal trust between humans and AI</q>. <br />
<br />
Affectiva competes with some of the big tech companies (including Amazon, IBM and Microsoft), which now offer
<q>emotional analysis</q> or <q>sentiment analysis</q> alongside facial
recognition. <br />
<br />
One proposed use of this technology is in the classroom. The idea is
to install a webcam in the classroom: the system watches the students,
monitors their emotional state, and gives feedback to the teacher in
order to maximize student engagement. (For example, Mark Lieberman reports a university trial in Minnesota, based on the Microsoft system. Lieberman includes some sceptical voices in his report, and the trial is discussed further in the 2018 AI Now report.)<br />
<br />
So how do such systems work? The computer is trained to recognize a <q>happy</q> face by being shown large numbers of images of happy faces. This depends on a team of human
coders labelling the images.<br />
<br />
And this coding generally relies on a <q>classical</q> theory of emotions. Much of this work
is credited to a research psychologist called Paul Ekman, who developed a
Facial Action Coding System (FACS). Most of these
programs use a version called EMFACS, which detects six or seven supposedly universal emotions: anger, contempt, disgust, fear, happiness,
sadness and
surprise. The idea is that because these emotions are <q>hardwired</q>, they can be detected by observing facial muscle movements.<br />
<br />
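This labelling-and-training pipeline can be caricatured in a few lines. The sketch below is purely illustrative: the feature vectors, labels and nearest-centroid rule are invented (real systems are far more elaborate), but it shows how the human coders' labels end up determining every prediction the system makes.

```python
import math

# Each "face" is an invented three-number feature vector (say: brow raise,
# lip-corner pull, jaw drop); the labels stand in for the human coders'
# EMFACS-style annotations.
LABELLED_FACES = {
    "happiness": [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1]],
    "surprise":  [[0.9, 0.1, 0.9], [0.8, 0.2, 0.8]],
}

def centroid(vectors):
    # Average the labelled examples, dimension by dimension
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

# "Training" here is just averaging each label's examples...
CENTROIDS = {label: centroid(faces) for label, faces in LABELLED_FACES.items()}

def classify(features):
    # ...and classification picks the nearest centroid. Whatever assumptions
    # the coders baked into the labels are baked into every prediction.
    return min(CENTROIDS, key=lambda label: math.dist(CENTROIDS[label], features))
```

Notice that nothing in `classify` knows anything about emotion: the system can only reproduce the categories and cultural assumptions embedded in the labelled data, which is precisely the point of the criticism that follows.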
Lisa Feldman Barrett, one of the leading critics of the classical
theory, argues that emotions are more complicated, and are a product of
one's upbringing and environment. <q>Emotions are real, but not in the objective sense that molecules or
neurons are real. They are real in the same sense that money is real –
that is, hardly an illusion, but a product of human agreement.</q><br />
<br />
It has also been observed that people from different parts of the
world, or from different ethnic groups, express emotions differently.
(Who knew?) Algorithms that fail to deal with ethnic diversity may be grossly inaccurate and set
people up for racial discrimination. For example, in a recent study of
two facial recognition software products, one product consistently
interpreted black sportsmen as angrier than white sportsmen, while the
other labelled the black subjects as contemptuous.<br />
<br />
But Affectiva prides itself on dealing with ethnic diversity. When Rana el-Kaliouby spoke to Oscar Schwartz recently, while acknowledging that the technology is not foolproof, she insisted on the importance of collecting <q>diverse data sets</q> in order to compile <q>ethnically based benchmarks ... codified
assumptions about how an emotion is expressed within different ethnic
cultures</q>. In her most recent video, she also insisted on the importance of diversity of the team building these systems.<br />
<br />
Shoshana Zuboff describes sentiment analysis as yet another example
of the behavioural surplus that helps Big Tech accumulate what she calls
surveillance capital.<br />
<blockquote class="tr_bq"><div>
<q>Your unconscious - where feelings form before
there are words to express them - must be recast as simply one more
source of raw-material supply for machine rendition and analysis, all
of it for the sake of more-perfect prediction. ... This complex of
machine intelligence is trained to isolate, capture, and render the most
subtle and intimate behaviors, from an inadvertent blink to a jaw that
slackens in surprise for a fraction of a second.</q><br /></div><div style="text-align: right;"><cite>Zuboff 2019, pp
282-3</cite></div></blockquote>
Zuboff relies heavily on a long interview with el-Kaliouby in the New
Yorker in 2015, where she expressed optimism about the potential of this
technology, not only to read emotions but to affect them.<br />
<blockquote class="tr_bq">
<q>I do believe that if we have information about your emotional
experiences we can help you be in a more positive mood and influence
your wellness.</q> </blockquote>
In her talk last month, without explicitly mentioning Zuboff's book, el-Kaliouby put a strong emphasis on the ethical values of Affectiva, explaining that it has turned down offers of funding from the security, surveillance and lie-detection sectors, in order to concentrate on such areas as safety and mental health. I wonder if IBM ("Principles for the Cognitive Era") and Microsoft ("The Future Computed: Artificial Intelligence and its Role in Society") will take the same position?<br />
<br />
HT @scarschwartz @raffiwriter <br />
<br />
<hr /><p>
<br />
<a href="https://ainowinstitute.org/AI_Now_2018_Report.pdf">AI Now Report 2018</a> (AI Now Institute, December 2018)</p>
<p>Bernd Bösel and Serjoscha Wiemer (eds), <a href="https://meson.press/books/affective-transformations/">Affective Transformations: Politics—Algorithms—Media</a> (Meson Press, 2020)<br />
<br />
Hannah Devlin, <a href="https://www.theguardian.com/technology/2020/feb/16/ai-systems-claiming-to-read-emotions-pose-discrimination-risks">AI systems claiming to 'read' emotions pose discrimination risks</a> (Guardian, 16 February 2020)<br />
<br />
Rana el-Kaliouby, <a href="https://youtu.be/iQxS9Eq-hms">Teaching Machines to Feel</a> (Bloomberg via YouTube, 20 Sep 2017), <a href="https://youtu.be/8n_tv-BYqM0">Emotional Intelligence</a> (New York Times via YouTube, <span class="date style-scope ytd-video-secondary-info-renderer">6 Mar 2019)</span><br />
<br />
Lisa Feldman Barrett, <a href="https://www.affective-science.org/pubs/2013/psych-construction-darwinian.pdf">Psychological Construction: The Darwinian Approach to the Science of Emotion</a> (Emotion Review Vol. 5, No. 4, October 2013) pp 379-389</p><p>Douglas Heaven, <a href="https://www.nature.com/articles/d41586-020-00507-5">Why faces don't always tell the truth about feelings</a> (Nature, 26 February 2020)<br />
<br />
Raffi Khatchadourian, <a href="http://www.newyorker.com/magazine/2015/01/09/know-feel">We Know How You Feel</a> (New Yorker, 19 January 2015)<br />
<br />
Mark Lieberman, <a href="https://www.insidehighered.com/digital-learning/article/2018/02/20/sentiment-analysis-allows-instructors-shape-course-content">Sentiment Analysis Allows Instructors to Shape Course Content around Students’ Emotions</a> (Inside Higher Ed, 20 February 2018)<br />
<br />
Lauren Rhue, <a href="https://ssrn.com/abstract=3281765">Racial Influence on Automated Perceptions of Emotions</a> (SSRN, 9 November 2018) <a class="textlink" href="https://dx.doi.org/10.2139/ssrn.3281765" target="_blank">http://dx.doi.org/10.2139/ssrn.3281765</a>
<br />
<br />
Oscar Schwartz, <a href="https://www.theguardian.com/technology/2019/mar/06/facial-recognition-software-emotional-science">Don’t look now: why you should be worried about machines reading your emotions</a> (The Guardian, 6 Mar 2019)
<br />
<br />
Shoshana Zuboff, The Age of Surveillance Capitalism (UK Edition: Profile Books, 2019)
<br />
<br />
Wikipedia: <a href="https://en.wikipedia.org/wiki/Facial_Action_Coding_System">Facial Action Coding System</a> <br />
<br />
Related posts: <a href="https://demandingchange.blogspot.com/2009/09/linking-facial-expressions.html">Linking Facial Expressions</a> (September 2009), <a href="https://rvsoapbox.blogspot.com/2018/06/data-and-intelligence-principles-from.html">Data and Intelligence Principles from Major Players</a> (June 2018), <a href="https://demandingchange.blogspot.com/2019/02/shoshana-zuboff-on-surveillance.html">Shoshana Zuboff on Surveillance Capitalism</a> (February 2019), <a href="https://rvsoftware.blogspot.com/2019/06/listening-for-trouble.html">Listening for Trouble</a> (June 2019)<br />
<br />
<br />
<span style="font-size: xx-small;">Links added February 2020</span> </p>Anonymoushttp://www.blogger.com/profile/11805047120034346106noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-17471538150013981872019-02-24T13:47:00.000+00:002019-12-29T10:41:35.460+00:00Hidden FunctionalityConsumer surveillance was in the news again this week. Apparently Google forgot to tell consumers that there was a <strike>cuckoo</strike> microphone in the Nest.<br />
<br />
So what's new? A few years ago, people were getting worried about a microphone inside the
Samsung Smart TV that would eavesdrop on your conversations. (HT @<a href="https://twitter.com/xor/status/564356757007261696/photo/1">Parker Higgins</a>)<br />
<br />
But at least in those cases we think we know which corporation is
responsible. In other cases, this may not be so clear-cut. For example,
who decided to install a camera into the seat-back entertainment systems used by several airlines? <br />
<br />
And there is a much more general problem here. It is usually cheaper to use general-purpose hardware than to design special-purpose hardware. For this reason, most IoT devices have far more processing power and
functionality than they strictly need. This extra functionality carries two dangers. Firstly, <strike>if</strike> when the device is hacked, the functionality can be
coopted for covert or malicious purposes. (For example IoT devices with
weak or non-existent security can be recruited into a global botnet.)
Secondly, sooner or later someone will think of a justification for
switching the functionality on. (In the case of the Nest microphone Google already did, which is what alerted people to the microphone's existence.)<br />
<br />
So who is responsible for the failure of a component to act properly,
who is responsible for the limitation of purpose, and how can this
responsibility be transparently enforced?<br />
<br />
Some US politicians have started talking about a technology version of "food labelling" - so that people can avoid products and services if they are sensitive to a particular "ingredient". With physical products, this information would presumably be added to the safety leaflet that you find in the box whenever you buy anything electrical. With online services, this information should be included in the Privacy Notice, which again nobody reads. (There are various estimates about the number of weeks it would take you to read all these notices.) So clearly it is unreasonable to expect the consumer to police this kind of thing.<br />
<br />
Just as the supermarkets have a "free from" aisle where they sell all the overpriced gluten-free food, perhaps we can ask electronics retailers to have a "connectivity-free" section, where the products can be guaranteed safe from Ray Ozzie's latest initiative, which is to build devices that
connect automatically by default, rather than wait for the user to
switch the connectivity on. (Hasn't he heard of privacy and security by
default?)<br />
<br />
And of course high-tech functionality is no longer limited to products that are obviously electrical. The RFID tags in your clothes may not always be deactivated when you leave the store. And for other examples of SmartClothing, check out my posts on <a href="https://rvsoftware.blogspot.com/search/label/wearable%20tech">Wearable Tech</a>.<br />
<br />
<br />
<hr />
<br />
Nick Bastone, <a href="https://www.businessinsider.com/nest-microphone-was-never-supposed-to-be-a-secret-2019-2">Google says the built-in microphone it never told Nest users about was 'never supposed to be a secret'</a> (Business Insider, 19 February 2019)<br />
<br />
Nick Bastone, <a href="https://www.businessinsider.com/democratic-presidential-candidates-speak-out-google-nest-microphone-2019-2">Democratic presidential candidates are tearing into Google for the hidden Nest microphone, and calling for tech gadget 'ingredients' labels</a> (Business Insider, 21 February 2019)
<br />
<br />
Ina Fried, <a href="https://www.axios.com/exclusive-ray-ozzie-wants-to-wirelessly-connect-the-world-9b746722-76f1-4743-91bb-692261cdce7b.html">Exclusive: Ray Ozzie wants to wirelessly connect the world</a> (Axios, 22 February 2019)<br />
<br />
Melissa Locker, <a href="https://www.fastcompany.com/90310098/someone-found-cameras-in-singapore-airlines-in-flight-entertainment-system">Someone found cameras in Singapore Airlines’ in-flight entertainment system</a> (Fast Company, 20 February 2019)<br />
<br />
Ben Schoon, <a href="https://9to5google.com/2019/02/04/nest-secure-google-assistant/">Nest Secure can now be turned into another Google Assistant speaker for your home</a> (9 to 5 Google, 4 February 2019) <br />
<br />
Related posts: <a href="https://rvsoftware.blogspot.com/2014/12/have-you-got-big-data-in-your-underwear.html">Have you got Big Data in your Underwear?</a> (December 2014), <a href="https://rvsoftware.blogspot.com/2015/11/towards-internet-of-underthings.html">Towards the Internet of Underthings</a> (November 2015), <a href="https://rvsoftware.blogspot.com/2017/11/pax-technica-on-risk-and-security.html">Pax Technica - On Risk and Security</a> (November 2017), <a href="https://rvsoapbox.blogspot.com/2018/06/outdated-assumptions-connectivity-hunger.html">Outdated Assumptions - Connectivity Hunger</a> (June 2018), <a href="https://demandingchange.blogspot.com/2019/02/shoshana-zuboff-on-surveillance.html">Shoshana Zuboff on Surveillance Capitalism</a> (February 2019)Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-45853777547986664492018-04-02T10:47:00.000+01:002018-04-02T10:47:03.501+01:00Blockchain and the Edge of Obfuscation - PrivacyAccording to Wikipedia,<br />
<blockquote class="tr_bq">
a <b>blockchain</b> is a decentralized, distributed and public digital ledger that is used to record transactions across many computers so that the record cannot be altered retroactively without the alteration of all subsequent blocks and the collusion of the network. (<a href="https://en.wikipedia.org/w/index.php?title=Blockchain&oldid=833405189">Wikipedia, retrieved 31 March 2018</a>)</blockquote>
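The "cannot be altered retroactively" property in this definition follows from each block committing to the hash of its predecessor. Here is a minimal sketch of that mechanism in Python; all names are illustrative, not the API of any real blockchain implementation.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a block's canonical JSON representation.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: str) -> None:
    # Each new block stores the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def verify(chain: list) -> bool:
    # The chain is valid only if every stored prev_hash still matches.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
for r in ["alice pays bob", "bob pays carol", "carol pays dave"]:
    append_block(chain, r)

assert verify(chain)            # intact chain verifies
chain[0]["record"] = "edited"   # tamper with an early record...
assert not verify(chain)        # ...and every later link is broken
```

Editing any early record changes its hash, so the `prev_hash` stored in the next block no longer matches; that is why retroactive alteration requires rewriting all subsequent blocks, as the definition says.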
<br />
Some people are concerned that the essential architecture of blockchain conflicts with the requirements of privacy, especially as represented by the EU General Data Protection Regulation (GDPR), which comes into force on 25th May 2018. In particular, it is not obvious how an immutable blockchain can cope with the requirement to allow data subjects to amend and erase personal data.<br />
<br />
<br />
Optimists have suggested a number of compromises.<br />
<br />
<b>Firstly</b>, the data may be divided between the Blockchain and another data store, known as the Offchain. If the personal data isn't actually held on the blockchain, then it's easier to amend and delete.<br />
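The "Offchain" compromise can be sketched as follows: the immutable ledger holds only a salted hash of the personal data, while the data itself sits in an ordinary mutable store. Erasure then means deleting the off-chain record, leaving an on-chain hash that by itself reveals nothing. This is a toy illustration with made-up names, not a real implementation.

```python
import hashlib

onchain: list = []   # append-only ledger: holds only commitments (hashes)
offchain: dict = {}  # mutable store: commitment -> personal data

def record(personal_data: str, salt: str) -> str:
    # Only a salted hash of the personal data goes on-chain.
    commitment = hashlib.sha256((salt + personal_data).encode()).hexdigest()
    onchain.append(commitment)
    offchain[commitment] = personal_data
    return commitment

def erase(commitment: str) -> None:
    # GDPR-style erasure: drop the off-chain data (and, in practice,
    # the salt). The on-chain hash remains but is no longer linkable.
    offchain.pop(commitment, None)

c = record("alice@example.com", salt="random-per-record-salt")
assert offchain[c] == "alice@example.com"
erase(c)
assert c in onchain and c not in offchain
```

The design choice here is that the blockchain never stores personal data directly, only a commitment to it, so the ledger's immutability and the data subject's right to erasure no longer collide head-on.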
<br />
<b>Secondly</b>, the underlying meaning of the information can be "completely obfuscated". Researchers at MIT are inventing a 21st-century Enigma machine, which will store "secret contracts" instead of the normal "smart contracts".<br />
<br />
<ul><span style="font-size: x-small;">
Historical note: In the English-speaking world, Alan Turing is often credited with cracking the original Enigma machine, but it was Polish mathematicians who cracked it first.
</span></ul>
<br />
<b>Thirdly</b>, there may be some wriggle-room in how the word "erasure" is interpreted. Irish entrepreneur Shane Brett thinks that this term may be transposed differently in different EU member states. (This sounds like a recipe for bureaucratic confusion.) It has been suggested that personal data could be "blacklisted" rather than actually deleted.<br />
<br />
<b>Finally</b>, as reported by David Meyer, blockchain experts can just argue that GDPR is "already out of date" and hope regulators won't be too "stubborn" to "adjust" the regulation.<br />
<br />
<br />
But the problem with these compromises is that once you dilute the pure blockchain concept, some of the supposed benefits of blockchain evaporate, and it just becomes another (resource-hungry) data store. Perhaps it is blockchain that is "already out of date".<br />
<br />
<hr />
<br />
Vitalik Buterin, <a href="https://blog.ethereum.org/2016/01/15/privacy-on-the-blockchain/">Privacy on the Blockchain</a> (Ethereum Blog, 15 January 2016)<br />
<br />
Michèle Finck, <a href="https://www.law.ox.ac.uk/business-law-blog/blog/2018/02/blockchains-and-gdpr">Blockchains and the GDPR</a> (Oxford Business Law Blog, 13 February 2018)<br />
<br />
Josh Hall, <a href="https://www.theguardian.com/commentisfree/2018/mar/21/blockchain-privacy-data-protection-cambridge-analytica">How Blockchain could help us take back control of our privacy</a> (The Guardian, 21 March 2018)
<br />
<br />
David Meyer, <a href="https://iapp.org/news/a/blockchain-technology-is-on-a-collision-course-with-eu-privacy-law/">Blockchain is on a collision course with EU privacy law</a> (IAPP, 27 February 2018) via <a href="https://thenextweb.com/syndication/2018/03/26/blockchain-collision-course-eu-privacy-law/">The Next Web</a><br />
<br />
Dean Steinbeck, <a href="https://cointelegraph.com/news/how-new-eu-privacy-laws-will-impact-blockchain-expert-take">How New EU Privacy Laws Will Impact Blockchain</a> (Coin Telegraph, 30 March 2018)<br />
<br />
Wikipedia: <a href="https://en.wikipedia.org/w/index.php?title=Blockchain&oldid=833405189">Blockchain</a>, <a href="https://en.wikipedia.org/wiki/Enigma_machine">Enigma machine</a><br />
<br />
<br />Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-73394674629256141842018-01-09T22:46:00.000+00:002018-01-11T14:26:44.967+00:00Blockchain and the Edge of Disruption - KodakShares in Eastman Kodak more than doubled today following the announcement of the Kodakcoin, "a photocentric cryptocurrency to empower photographers and agencies to take greater control in image rights management".<br />
<br />
As Andrew Hill points out, blockchain enthusiasts have often mentioned rights management as one of the more promising applications of digital ledger technology. @willms_ listed half a dozen initiatives back in August 2016, and blockchain investor @alextapscott had a piece about it in the Harvard Business Review last year.<br />
<br />
In recent years, Kodak has been held up (probably unfairly) as an example of a company that didn't understand digital. Perhaps to rub this message home, today's story in the Verge is illustrated with stock footage of analogue film. But the bounce in the share price indicates that many investors are willing to give Kodak another chance to prove its digital mettle.<br />
<br />
However, some commentators are cynical.<br />
<br />
<blockquote class="twitter-tweet" data-lang="en">
<div dir="ltr" lang="en">
The shambling remains of Kodak are running an ICO to start a site that's Shutterstock but web-searching for usages. And, for absolutely no reason whatsoever, a blockchain and an ICO token. Legalities presumably in order - accredited investors only. <a href="https://t.co/zFL1nftRxa">https://t.co/zFL1nftRxa</a></div>
— David Gerard (@davidgerard) <a href="https://twitter.com/davidgerard/status/950847586830823425?ref_src=twsrc%5Etfw">January 9, 2018</a></blockquote>
<script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>
<br />
<br />
The point of blockchain is to support distributed trust. But the rights management service provided by Kodak doesn't rely on distributed trust, it relies entirely on Kodak. If you trust Kodak, you don't need the blockchain to validate a Kodak-operated service; and if you don't trust Kodak, you probably won't be using the service anyway. So what's the point of blockchain in this example?<br />
<br />
<hr />
<br />
<br />
Chloe Cornish, <a href="https://www.ft.com/content/8c650b4c-f564-11e7-8715-e94187b3017e">Kodak pivot to blockchain sends shares flying</a> (FT, 9 January 2018)
<br />
<br />
Chris Foxx and Leo Kelion, <a href="http://www.bbc.co.uk/news/technology-42630136">CES 2018: Kodak soars on KodakCoin and Bitcoin mining plans</a> (BBC News, 9 January 2018)
<br />
<br />
David Gerard, <a href="https://davidgerard.co.uk/blockchain/2018/01/10/kodaks-ico-for-a-stock-photo-site-that-doesnt-exist-yet-but-the-stock-price/">Kodak’s ICO for a stock photo site that doesn’t exist yet. But the stock price!</a> (10 January 2018)<br />
<br />
Jeremy Herron, <a href="https://www.bloomberg.com/news/articles/2018-01-09/kodak-stock-surges-after-announcing-coin-to-join-crypto-craze">Kodak Surges After Announcing Plans to Launch Cryptocurrency Called 'Kodakcoin' </a>(Bloomberg, 9 January 2018)<br />
<br />
Andrew Hill, <a href="https://www.ft.com/content/c1aae34c-f566-11e7-88f7-5465a6ce1a00">Kodak’s convenient click into the blockchain</a> (FT, 9 January 2018)
<br />
<br />
Shannon Liao, <a href="https://www.theverge.com/2018/1/9/16869998/kodak-kodakcoin-blockchain-platform-ethereum-ledger-stock-price">Kodak announces its own cryptocurrency and watches stock price skyrocket</a> (The Verge, 9 January 2018)
<br />
<br />
Willy Shih, <a href="https://sloanreview.mit.edu/article/the-real-lessons-from-kodaks-decline/">The Real Lessons From Kodak’s Decline</a> (Sloan Management Review, Summer 2016)
<br />
<br />
Don Tapscott and Alex Tapscott, <a href="https://hbr.org/2017/03/blockchain-could-help-artists-profit-more-from-their-creative-works">Blockchain Could Help Artists Profit More from Their Creative Works</a> (HBR, 22 March 2017)<br />
<br />
Jessie Willms, <a href="https://bitcoinmagazine.com/articles/is-blockchain-powered-copyright-protection-possible-1470758430/">Is Blockchain-Powered Copyright Protection Possible?</a> (Bitcoin Magazine, 9 August 2016)<br />
<br />
<br />
<br />
Related posts<br />
<br />
<a href="https://rvsoftware.blogspot.co.uk/2017/09/blockchain-and-edge-of-disruption-brexit.html">Blockchain and the Edge of Disruption - Brexit</a> (September 2017)<br />
<a href="http://rvsoftware.blogspot.com/2017/09/blockchain-and-edge-of-disruption-fake.html">Blockchain and the Edge of Disruption - Fake News</a> (September 2017)Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-36835223499264202292017-12-03T21:28:00.000+00:002021-04-27T09:05:39.511+01:00IOT is coming to town<h2>
You better watch out</h2>
<br />
<iframe allowfullscreen="" frameborder="0" height="270" src="https://www.youtube.com/embed/OHdsIRGq0ZU" width="480"></iframe>
<br />
<a href="https://fil.forbrukerradet.no/wp-content/uploads/2017/10/watchout-rapport-october-2017.pdf">#WatchOut Analysis of smartwatches for children</a> (Norwegian Consumer Council, October 2017). <a href="https://boingboing.net/2017/10/21/remote-listening-device.html">BoingBoing</a> comments that<br />
<blockquote class="tr_bq">
Kids' smart watches are a security/privacy dumpster-fire.</blockquote>
<br />
Charlie Osborne, <a href="http://www.zdnet.com/article/smartwatch-security-fails-to-impress-top-devices-vulnerable-to-cyberattack/">Smartwatch security fails to impress: Top devices vulnerable to cyberattack</a> (ZDNet, 22 July 2015)<br />
<br />
<blockquote class="tr_bq">
A new study into the security of smartwatches found that 100 percent of popular device models contain severe vulnerabilities.</blockquote>
<br />
Matt Hamblen, <a href="https://www.computerworld.com/article/2925311/wearables/as-smartwatches-gain-traction-personal-data-privacy-worries-mount.html">As smartwatches gain traction, personal data privacy worries mount</a> (Computerworld, 22 May 2015)<br />
<blockquote class="tr_bq">
Companies could use wearables to track employees' fitness, or even their whereabouts. </blockquote>
<br />
<br />
<h2>
You better not cry</h2>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blog.affectiva.com/hs-fs/hubfs/Emotion-Chip-Future.png?t=1512418959947&width=1020&height=629&name=Emotion-Chip-Future.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="493" data-original-width="800" height="245" src="https://blog.affectiva.com/hs-fs/hubfs/Emotion-Chip-Future.png?t=1512418959947&width=1020&height=629&name=Emotion-Chip-Future.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Source: Affectiva</td></tr>
</tbody></table>
<div>
<br /></div>
<div>
<br /></div>
<div>
Rana el Kaliouby, <a href="http://blog.affectiva.com/the-mood-aware-internet-of-things">The Mood-Aware Internet of Things</a> (Affectiva, 24 July 2015)</div>
<br />
<a href="http://www.aplanforliving.com/6-wearables-to-track-your-emotions/">Six Wearables to Track Your Emotions</a> (A Plan For Living)<br />
<br />
<blockquote class="tr_bq">
Soon it might be just as common to track your emotions with a wearable device as it is to monitor your physical health. </blockquote>
<br />
Anna Umanenko, <a href="https://onix-systems.com/blog/emotion-sensing-technology-in-the-internet-of-things">Emotion-sensing technology in the Internet of Things</a> (Onix Systems)<br />
<br />
<br />
<h2>
Better not pout</h2>
<div>
<br /></div>
Shaun Moore, <a href="https://medium.com/iotforall/fooling-facial-recognition-7a088be92554">Fooling Facial Recognition</a> (Medium, 26 October 2017)<br />
<br />
Mingzhe Jiang et al, <a href="https://www.researchgate.net/publication/291832754_IoT-based_Remote_Facial_Expression_Monitoring_System_with_sEMG_Signal">IoT-based Remote Facial Expression Monitoring System with sEMG Signal</a> (IEEE 2016)<br />
<br />
<blockquote class="tr_bq">
Facial expression recognition is studied across several fields such as human emotional intelligence in human-computer interaction to help improving machine intelligence, patient monitoring and diagnosis in clinical treatment. </blockquote>
<br />
<br />
<h2>
I'm telling you why</h2>
<br />
Maria Korolov, <a href="https://www.csoonline.com/article/3142484/internet-of-things/report-surveillance-cameras-most-dangerous-iot-devices-in-enterprise.html">Report: Surveillance cameras most dangerous IoT devices in enterprise</a> (CSO, 17 November 2016)
<br />
<blockquote class="tr_bq">
<br />
Networked security cameras are the most likely to have vulnerabilities. </blockquote>
<br />
Leor Grebler, <a href="https://medium.com/iotforall/why-do-iot-devices-die-e4df0c7a075d">Why do IOT devices die</a> (Medium, 3 December 2017)<br />
<br />
<h2>
IOT is coming to town</h2>
<div>
<br /></div>
<div>
Nick Ismail, <a href="http://www.information-age.com/iot-developing-smart-cities-123463276/">The role of the Internet of Things in developing Smart Cities</a> (Information Age, 18 November 2016)</div>
<div>
<br />
<br /></div>
<h2>
It's making a list
And checking it twice</h2>
<br />
Daan Pepijn, <a href="https://thenextweb.com/contributors/2017/09/21/blockchain-tech-missing-link-success-iot/">Is blockchain tech the missing link for the success of IoT?</a> (TNW, 21 September 2017)<br />
<br />
<br />
<br />
<h2>
Gonna find out Who's naughty and nice</h2>
<br />
<a href="https://www.cybersecurityintelligence.com/blog/police-using-iot-to-detect-crime-2194.html">Police Using IoT To Detect Crime</a> (Cyber Security Intelligence, 14 Feb 2017)<br />
<br />
James Pallister, <a href="https://www.designcouncil.org.uk/news-opinion/will-internet-things-set-family-life-back-100-years">Will the Internet of Things set family life back 100 years?</a> (Design Council, 3 September 2015)<br />
<br />
<br />
<h2>
It sees you when you're sleeping
It knows when you're awake</h2>
<br />
<blockquote class="tr_bq">
But don't just monitor your sleep. Understand it. The Sense app gives you instant access to everything you could want to know about your sleep. View a detailed breakdown of your sleep cycles, see what happened during your night, discover trends in your sleep quality, and more. (<a href="https://hello.is/">Hello</a>)
</blockquote>
<br />
Octav G, <a href="https://www.sammobile.com/2015/09/02/samsungs-sleepsense-is-an-iot-enabled-sleep-tracker/">Samsung’s SLEEPsense is an IoT-enabled sleep tracker</a> (SAM Mobile, 2 September 2015)
<br />
<br />
<br />
<br />
<h2>
It knows if you've been bad or good
So be good for goodness sake!
</h2>
<div>
<br /></div>
<a href="https://www.theguardian.com/technology/2016/feb/09/internet-of-things-smart-home-devices-government-surveillance-james-clapper">US intelligence chief: we might use the internet of things to spy on you</a> (The Guardian, 9 Feb 2015)<br />
<br />
Ben Rossi, <a href="http://www.information-age.com/iot-and-free-will-how-artificial-intelligence-will-trigger-new-nanny-state-123461568/">IoT and free will: how artificial intelligence will trigger a new nanny state</a> (Information Age, 7 June 2016)<br />
<br />
<br />
<br />
<hr />
<br />
<b>Twitter Version
</b><br />
<br />
<blockquote class="twitter-tweet" data-lang="en">
<div dir="ltr" lang="en">
Thread on <a href="https://twitter.com/hashtag/Privacy?src=hash&ref_src=twsrc%5Etfw">#Privacy</a> and the <a href="https://twitter.com/hashtag/InternetOfThings?src=hash&ref_src=twsrc%5Etfw">#InternetOfThings</a></div>
— Richard Veryard (@richardveryard) <a href="https://twitter.com/richardveryard/status/937777101058527232?ref_src=twsrc%5Etfw">December 4, 2017</a></blockquote>
<script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script><br />
<b>Related Posts</b><br />
<br />
<a href="https://rvsoftware.blogspot.co.uk/2017/11/pax-technica.html">Pax Technica - The Book</a> (November 2017)<br />
<a href="https://demandingchange.blogspot.co.uk/2017/11/pax-technica-conference.html">Pax Technica - The Conference</a> (November 2017)<br />
<a href="http://rvsoftware.blogspot.com/2017/11/pax-technica-on-risk-and-security.html">Pax Technica - On Risk and Security</a> (November 2017)<br />
<a href="http://rvsoapbox.blogspot.com/2017/12/the-smell-of-data.html">The Smell of Data</a> (December 2017)<br />
<div>
<br />
<span style="font-size: xx-small;">Updated 10 December 2017</span></div>
Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-14895620926948404922017-11-25T13:22:00.002+00:002019-12-29T10:41:35.416+00:00Pax Technica - On Risk and Security<span style="font-size: x-small;">#paxtechnica </span>Some further thoughts arising from the @CRASSHlive conference in Cambridge on <a href="http://www.crassh.cam.ac.uk/events/27490">The Implications of the Internet of Things</a>. (For a comprehensive account, see @<a href="https://twitter.com/LaurieJ/status/933994196108750848">LaurieJ's livenotes</a>.)<br />
<br />
Many people are worried about the security implications of the Internet of Things. The world is being swamped with cheap internet-enabled devices. As the manufacturing costs, size and power consumption of these devices are being driven down, most producers have neither the expertise nor the capacity to build any kind of security into them.<br />
<br />
One of the reasons why this problem is increasing is that it is cheaper to use a general-purpose chip than to design a special-purpose chip. So most IoT devices have far more processing power and functionality than they strictly need. This extra functionality can then be coopted for covert or malicious purposes. IoT devices may easily be recruited into a global botnet, and devices from some sources may even have been covertly designed for this purpose.<br />
<br />
Sensors are bad enough - think of baby monitors and sex toys. But additional concerns apply to IoT actuators - devices that can produce physical effects. For example, lightbulbs that can flash (triggering epileptic fits), thermostats that can switch on simultaneously across a city (melting the grid), centrifuges that can spin out of control (as in the sabotage of Iran's nuclear capability).<br />
<br />
Jon Crowcroft proposed that some of this could be addressed in terms of safety and liability. Safety is a useful driver for increased regulation, and insurance companies will be looking for ways to protect themselves and their corporate customers. While driverless cars generate much discussion, similar questions of safety and liability arise from any cars containing significant quantities of new technology. What if the brake algorithm fails? And given the recent history of cheat software by car manufacturers, can we trust the car not to alter the driver logs in order to evade liability for an accident?<br />
<br />
In many cases, the consumer can be persuaded that there are benefits from internet-enabled devices, and these benefits may depend on some level of interoperability between multiple devices. But we aren't equipped to reason about the trade-off between accessibility/usability and security/privacy.<br />
<br />
For comparison's sake, consider a retailer who has to decide whether to place the merchandise in locked glass cases or on open shelves. Open shelves will result in more sales, but also more shoplifting. So the retailer locks up the jewelry but not the pencils or the furniture, and this is based on a common-sense balance of value and risk.<br />
<br />
But with the Internet of Things, people generally don't have a good enough understanding of value and risk to be able to reason intelligently about this kind of trade-off. Philip Howard advises users to appreciate that devices "have an immediate function that is useful to you and an indirect function that is useful to others" (p255). But just knowing this is not enough. True security will only arise when we have the kind of transparency (or visibility or unconcealment) that I referenced in my previous post.<br />
<br />
<br />
<b>Related Posts</b><br />
<br />
<a href="https://rvsoftware.blogspot.co.uk/2015/10/defeating-device-paradigm.html">Defeating the Device Paradigm</a> (October 2015)<br />
<a href="https://rvsoftware.blogspot.co.uk/2017/11/pax-technica.html">Pax Technica - The Book</a> (November 2017)<br />
<a href="https://demandingchange.blogspot.co.uk/2017/11/pax-technica-conference.html">Pax Technica - The Conference</a> (November 2017)<br />
<a href="http://rvsoapbox.blogspot.com/2017/12/the-smell-of-data.html">The Smell of Data</a> (December 2017)<br />
<a href="https://rvsoapbox.blogspot.com/2018/06/outdated-assumptions-connectivity-hunger.html">Outdated Assumptions - Connectivity Hunger</a> (June 2018)<br />
<br />
<br />
<b>References</b><br />
<br />
Cory Doctorow, <a href="https://en.wikisource.org/wiki/The_Coming_War_on_General_Computation">The Coming War on General Computation</a> (2011)<br />
<br />
Carl Herberger, <a href="https://www.helpnetsecurity.com/2016/11/14/exploit-iot-2017/">How hackers will exploit the Internet of Things in 2017</a> (HelpNet Security, 14 November 2016)
<br />
<br />
Philip Howard, Pax Technica: How The Internet of Things May Set Us Free or Lock Us Up (Yale 2015)
<br />
<br />
Laura James, Pax Technica Notes (<a href="https://medium.com/@lbjames/pax-technica-the-implications-of-the-internet-of-things-adbcf7c93558">Session 1</a>, <a href="https://medium.com/@lbjames/pax-technica-morning-panel-92eaa1a77367">Session 2</a>, <a href="https://medium.com/@lbjames/pax-technica-afternoon-panel-80875942a70c">Session 3</a>, <a href="https://medium.com/@lbjames/pax-technica-afternoon-panel-privacy-111805e030c5">Session 4</a>)<br />
<br />
Holly Robbins, <a href="https://medium.com/the-state-of-responsible-internet-of-things-iot/hollyrobbins-6d2f81512242">The Path for Transparency for IoT Technologies</a> (ThingsCon, June 2017)<br />
<br />
Jack Wallen, <a href="http://www.zdnet.com/article/5-nightmarish-attacks-that-show-the-risks-of-iot-security/">Five nightmarish attacks that show the risks of IoT security</a> (ZDNet, 1 June 2017)Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-54943827412897395462017-11-19T13:09:00.000+00:002019-12-29T10:41:35.527+00:00Pax Technica - The BookIn preparation for a @CRASSHlive conference in Cambridge this coming week (<a href="http://www.crassh.cam.ac.uk/events/27490">Pax Technica: The Implications of the Internet of Things</a>), I've been reading Philip Howard's book, subtitled How The Internet of Things May Set Us Free or Lock Us Up.<br />
<br />
I'm going to start my review by quoting Howard's definition of his subject.<br />
<blockquote class="tr_bq">
The "internet of things" consists of human-made objects with small power supplies, embedded sensors, and addresses on the internet. Most of these networked devices are everyday items that are sending and receiving data about their conditions and our behavior. Unlike mobile phones and computers, devices on these networks are not designed for deliberate social interaction, content creation, or cultural consumption. The bulk of these networked devices simply communicate with other devices: coffeemakers, car parts, clothes, and a plethora of other products. This will not be an internet you experience through a browser. Indeed, as the technology develops, many of us will be barely aware that so many objects around us have power, are sensing, and are sending and receiving data. <cite> Howard p xi</cite></blockquote>
IoT experts may quibble with some of the details of this definition, but it broadly makes sense.<br />
<br />
<br />
My first problem with Howard's book is that he doesn't stick to this definition. He talks a lot about devices in general, but most of the time he is talking about other kinds of devices, such as mobile phones and chatbots. The book contains a wealth of reporting on the disruption caused by digital networks. But much of this is not about the internet of things as he defines it, but about social media, big data, fake news and other internet phenomena. These are important topics to be sure, which have been excellently addressed by other sociologists such as Zeynep Tufekci, as well as in the previous CRASSH conference <a href="http://www.crassh.cam.ac.uk/events/27100">Power Switch</a>. But the book claims to be about something different.<br />
<br />
<br />
My second problem with Howard's book is that he doesn't really question the notion of "device". There is a considerable literature on the philosophy of technology going back to Heidegger via Herbert Dreyfus and Albert Borgmann. In his introduction to Heidegger's <i>Question Concerning Technology</i>, William Lovitt summarizes Heidegger's position as follows.<br />
<blockquote class="tr_bq">
In our time, things are not even regarded as objects, because their only important quality has become their readiness for use. Today all things are being swept together into a vast network in which their only meaning lies in their being available to serve some end that will itself also be directed towards getting everything under control. <cite>QT p xxix</cite></blockquote>
<br />
Albert Borgmann introduced the notion of the Device Paradigm to analyse the way <q>technological devices</q> are perceived and consumed in modern society. In many situations, there is a fetish of the <q>device</q>, obscuring the network infrastructure that is required to deliver the affordance or <q>commodity</q> of the device.<br />
<br />
One of the consequences of this is that discussion of the internet of things tends to focus on the <q>things</q> rather than the <q>internet of</q>. At a healthcare event I attended a couple of years ago, various technology companies were exhibiting a range of wearable or implantable devices - some monitoring, some actively intervening. A patient with multiple conditions might be wearing several such devices. But these devices don't and currently cannot communicate with one another (as Howard's definition quoted above would suggest). Instead, as Howard acknowledges is the case for most devices, they are <q>designed to report data back to designers, manufacturers, and third party analysts</q> (p 211) - either directly or via an app on the user's smartphone. So that's basically a hub-and-spoke network.<br />
<br />
To thrive in the Pax Technica, Howard advises, <q>you can be a more sophisticated user ... you can be a functionally prominent political actor by thoughtfully managing your internet of things</q> (p 254-5). But what would that entail? Holly Robbins talks about a language to unmask the complexity of IoT. In 1986, before I had read any Heidegger or Borgmann, I called this Visibility. Heidegger calls it Unconcealment (Unverborgenheit).<br />
<br />
Borgmann's own approach is based on what he calls focal things and practices. As Wendt argues, the Internet of Things must create meaningful interactions in order to succeed.<br />
<br />
<blockquote class="tr_bq">
... something found in all of us: the need to take an active role in the world, to shape and design things, and to form rituals around activities. This is not to say we can’t do these things with smart objects, but it does underscore the importance of conscious, embodied interaction with things. The Internet of Things will only be successful if products are designed with purpose. <cite>Wendt</cite></blockquote>
<br />
So I'm hoping that these aspects of the Internet of Things will be discussed on Friday ...<br />
<br />
<hr />
<b>Related Posts</b><br />
<br />
<br />
<a href="https://rvsoftware.blogspot.co.uk/2015/06/understanding-value-chain-of-internet.html">Understanding the Value Chain of the Internet of Things</a> (June 2015)<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;">Some marketing experts are seeing the Internet of Things as a way of reasserting control over the consumer. </span></blockquote>
<br />
<a href="https://rvsoftware.blogspot.co.uk/2015/10/defeating-device-paradigm.html">Defeating the Device Paradigm</a> (Oct 2015)<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;">The Internet of Things is not a random collection of devices. It is a safety-critical system of systems, and must be understood (and regulated) as such. But it often suits certain commercial interests to focus our attention on the devices and away from the rest of the system. This is related to what Borgmann calls the Device Paradigm. </span></blockquote>
<br />
<a href="https://rvsoftware.blogspot.co.uk/2015/11/towards-internet-of-underthings.html">Towards the Internet of Underthings</a> (Nov 2015)<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;">We are now encouraged to account for everything we do: footsteps, heartbeats, posture. Until recently this kind of micro-attention to oneself was regarded as slightly obsessional, nowadays it seems to be perfectly normal. And of course these data are collected, and sent to the cloud, and turned into someone else's big data. (Good luck with those privacy settings, by the way.)</span></blockquote>
<a href="https://demandingchange.blogspot.co.uk/2017/11/pax-technica-conference.html">Pax Technica - The Conference</a> (November 2017)<br />
<a href="https://rvsoftware.blogspot.co.uk/2017/11/pax-technica-on-risk-and-security.html">Pax Technica - On Risk and Security</a> (November 2017)<br />
<br />
<a href="https://demandingchange.blogspot.com/2019/10/ethics-of-transparency-and-concealment.html">Ethics of Transparency and Concealment</a> (October 2019) <br />
<br />
<hr />
<b>References</b><br />
<br />
<br />
Albert Borgmann, Technology and the Character of Contemporary Life (Chicago, 1984)<br />
<br />
Oliver Christ, <a href="https://www.ajouronline.com/index.php/AJCIS/article/download/2483/1373">Martin Heidegger's Notions of World and Technology in the Internet of Things age</a> (Asian Journal of Computer and Information Systems, Volume 03, Issue 02, April 2015)<br />
<br />
Martin Heidegger, The Question Concerning Technology (Harper, 1977), with introduction by William Lovitt<br />
<br />
Philip Howard, Pax Technica: How The Internet of Things May Set Us Free or Lock Us Up (Yale 2015)
<br />
<br />
Holly Robbins, <a href="https://medium.com/the-state-of-responsible-internet-of-things-iot/hollyrobbins-6d2f81512242">The Path for Transparency for IoT Technologies</a> (ThingsCon, June 2017)
<br />
<br />
Zeynep Tufekci, <a href="http://firstmonday.org/ojs/index.php/fm/article/view/4901">Engineering the public: Big data, surveillance and computational politics</a> (First Monday, Volume 19, Number 7, 7 July 2014)
<br />
<br />
Richard Veryard, <a href="http://www.users.globalnet.co.uk/~rxv/tcm/visibility1.pdf">The Role of Visibility in Systems</a> (Human Systems Management 6, 1986)<br />
<br />
Thomas Wendt, <a href="https://uxmag.com/articles/internet-of-things-and-the-work-of-the-hands">Internet of Things and the Work of the Hands</a> (UX Magazine, 12 March 2014)<br />
<br />
Wikipedia: <a href="https://en.wikipedia.org/wiki/Device_paradigm">Device Paradigm</a><br />
<br />Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-7415430.post-15288778577275250162017-09-10T11:00:00.000+01:002018-10-04T18:10:45.308+01:00Blockchain and the Edge of Disruption - BrexitIn August 2017, the British government released a position paper on future customs arrangements with the EU following Brexit. Among other things, the paper suggested that new technology would address some of the challenges of maintaining trade "as frictionless as possible". In his report, the BBC technology correspondent mentioned number plate recognition, artificial intelligence, and of course blockchain. This week I met with a couple of our blockchain experts at Reply to brief me on what blockchain can and can't do to address this challenge.<br />
<br />
First, let's understand the nature of the challenge. When goods cross a customs border, they have to be declared. Some shipments are inspected to check that these declarations are accurate. This process has three objectives.<br />
<ul>
<li>To ensure that the goods don't exceed some import/export quota, and to levy customs duties if necessary</li>
<li>To ensure that the goods satisfy applicable standards and regulations - for example food safety</li>
<li>To identify contraband or counterfeit goods </li>
</ul>
Importers should be able to submit customs declarations electronically before the shipment reaches the border. This should enable customs officials (or algorithms working on their behalf) to select shipments for cursory or close inspection, thus reducing delays at the border. <br />
<br />
Note that these processes already exist for goods entering the European single market. But the potential impact of Brexit is a massive increase in the volume of cross-border shipments, and current systems and procedures are not expected to be able to handle these volumes. Goods will be delayed, with implications not only for cost but also the quality of fresh produce. Just-in-time supply chains will be disrupted. <br />
<br />
The primary contribution of blockchain here is to establish a robust and watertight data trail for goods. This means that if goods are properly labelled, the blockchain can deliver a complete history. This doesn't remove the need for customs declarations, but under certain conditions it could reduce the need for inspections at the border. For example, instead of being located at the border, a plain-clothes customs inspector might visit retail outlets with a hand-held label reader, verifying the blockchain record associated with the label, with the power to seize goods and instigate prosecutions.<br />
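The data-trail idea can be illustrated with a toy sketch. Everything here is hypothetical - the class name <code>ProvenanceChain</code> and the event fields are my own, and a real deployment would use a distributed ledger with digital signatures and consensus, not a single Python list - but it shows why a hash-chained record is tamper-evident.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a provenance event together with the previous hash,
    so editing any earlier event invalidates everything after it."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class ProvenanceChain:
    """Toy append-only ledger for one labelled item (illustrative only)."""
    def __init__(self, label_id: str):
        self.label_id = label_id
        self.events = []          # list of (record, hash) pairs
        self.head = "genesis"

    def append(self, actor: str, action: str):
        record = {"label": self.label_id, "actor": actor, "action": action}
        self.head = record_hash(record, self.head)
        self.events.append((record, self.head))

    def verify(self) -> bool:
        """Recompute every hash from the start of the chain."""
        prev = "genesis"
        for record, h in self.events:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True

chain = ProvenanceChain("GB-12345")
chain.append("Producer Ltd", "manufactured")
chain.append("Haulier plc", "shipped")
chain.append("Customs", "declaration accepted")
assert chain.verify()

# Tampering with an earlier event is detectable
chain.events[0][0]["actor"] = "Counterfeit Co"
assert not chain.verify()
```

The hand-held label reader in the example above would, in effect, be running something like <code>verify()</code> against the shared ledger.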
<br />
The blockchain can tell you about the provenance of the
item identified on the label, but what's to stop someone switching
labels, reusing old labels or even cloning labels?<br />
<br />
For some goods the stakes are very high. The lost revenue from the smuggling and counterfeiting of cigarettes alone is estimated at €10bn a year. So a Europe-wide system is being implemented to track cigarettes: by May 2019 all tobacco products within the EU are required to be "marked with a unique identifier" and security stamp. So that's just one high-stakes product, with a relatively small number of manufacturers.<br />
<br />
For diamonds, the stakes are even higher. The Kimberley Process Certification Scheme (KPCS) was introduced in 2003 to control trade in "conflict diamonds", but there are many flaws in the scheme. <br />
<blockquote class="tr_bq">
"If a consumer went into almost any jeweller in the UK and asked for the origin of a diamond on display, staff would be most unlikely to be able to confirm which country, let alone the mine, it was sourced from." [<a href="https://www.theguardian.com/sustainable-business/diamonds-blood-kimberley-process-mines-ethical">Guardian, March 2014</a>]</blockquote>
<br />
My colleagues briefed me on some interesting innovations they are working on for specific high-value products. One possibility is to inscribe a unique identifier into the product itself. For example, diamonds can be etched with a laser, expensive shoes can have the identifier embedded in the heel. And with 3D printing, it may be possible to manufacture each item with its own unique identifier.<br />
<br />
Another possibility is to create a detailed description of each item. Everledger, which describes itself as a permanent ledger for high-value assets, uses more than 40 features, including colour and clarity, to create a diamond's ID. It is now moving on to other high-value products such as fine wine. In future, such schemes should make it more difficult to pull off the kind of criminal sleight of hand for which Rudy Kurniawan got ten years in prison. <br />
<br />
To prevent cloning, you need more than blockchain. Just as numberplate recognition fails if people can use false numberplates, so blockchain labelling fails if people such as Kurniawan can easily reuse or copy the labels. At a wine auction in 2006, he offered eight magnums of 1947 Château Lafleur. This immediately aroused suspicion, because only five magnums were ever produced. If he had sold each bottle separately, would anyone have noticed? Yes perhaps, if every sale had to be recorded in the blockchain.<br />
<br />
If criminals have access to such technologies as etching and 3D printing, they may be able to create exact copies of labels and products that would appear valid when checked against the blockchain. So to guard against this, the blockchain has to have sufficient visibility of the supply chain to detect any duplicates.<br />
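What that supply-chain visibility buys is the ability to spot duplicates. A hypothetical sketch (the toy logic and label IDs are mine, not any real scheme's): if every sale is written to the ledger, a label that is sold more times than units were ever produced - Kurniawan's eight magnums against five made - becomes visible.

```python
from collections import Counter

def find_suspect_labels(events, produced_counts):
    """Flag label IDs sold more often than units produced - the
    tell-tale of reused or cloned labels (toy logic, illustrative only)."""
    sales = Counter(e["label"] for e in events if e["action"] == "sold")
    return sorted(label for label, n in sales.items()
                  if n > produced_counts.get(label, 0))

# Eight separate sales recorded against a label of which
# only five genuine units were ever produced.
events = [{"label": "LAFLEUR-1947-MAG", "action": "sold"} for _ in range(8)]
print(find_suspect_labels(events, {"LAFLEUR-1947-MAG": 5}))
# -> ['LAFLEUR-1947-MAG']
```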
<br />
In other words, to use blockchain properly, it's not enough to maintain a record of the origin of an item. You have to have a complete record of all transactions involving the item, including inspections. This means adding to the blockchain at every link in the supply chain. As the industry body BIFA observes in relation to blockchain generally, <br />
<blockquote class="tr_bq">
"this technology ... can only reap its full benefits if all stakeholders/members of the supply chain make use of the technology and can access it"</blockquote>
<br />
Further difficulties arise where goods are processed. For example, when a large animal or fish is cut up into pieces, to be sold to multiple consumers. Blockchain can be used to check that the total weight of the pieces is consistent with the original weight of the whole, but again this assumes that all the pieces are tracked. However, there is considerable interest in getting this kind of scheme to work effectively for products where sustainability is a major issue, such as tuna.<br />
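The weight-consistency check described above is essentially a mass-balance test. A minimal sketch, assuming my own function name and a crude tolerance (a real scheme would also allow for documented wastage at the transformation point):

```python
def pieces_consistent(whole_weight_kg, piece_weights_kg, tolerance_kg=0.5):
    """Check that the recorded pieces of a cut-up item (e.g. a whole tuna)
    don't add up to more than the original whole (toy logic)."""
    return sum(piece_weights_kg) <= whole_weight_kg + tolerance_kg

# A 180 kg tuna whose recorded steaks total 182 kg suggests that
# fish from somewhere else is being passed off under this record.
assert pieces_consistent(180, [60, 60, 55])      # 175 kg: plausible
assert not pieces_consistent(180, [60, 60, 62])  # 182 kg: flagged
```

Note that the check only works if all the pieces are tracked - an untracked steak escapes the balance entirely, which is exactly the point made above.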
<br />
Where there are transformation points in the supply chain - such as cutting a rough diamond into jewels or cutting a whole tuna into steaks - these can be subject to special monitoring and certification, and this can itself be written into the blockchain for further reassurance. <br />
<br />
<br />
In summary, my colleagues have convinced me that there are significant opportunities for blockchain in supporting the supply chain for selected high-value or safety-critical products, provided certain assumptions are met. Blockchain is not necessarily the whole solution, but works when appropriately combined with other innovations. <br />
<br />
But even these schemes will take years to get up to speed. We started with the problem of massive increases in the volume
of shipments crossing customs borders. In the examples I've discussed
here, customs facilitation is not the primary motive for introducing
blockchain, but may be an additional benefit. However, it is hard to see a sufficient number of these schemes being operational in time for Brexit, let alone a universal system for all categories of goods.<br />
<br />
<hr />
Postscript (October 2018)<br />
<br />
A white paper <a href="https://www.reply.com/en/Shared%20Documents/Blockchain-for-Brexit.pdf">Blockchain for Brexit</a> (pdf) was published by Reply last year, which included some of the above points. This white paper was mentioned by the Financial Times in October 2018, together with a quote that does not come from the paper.<br />
<br />
<hr />
<br />
Stephen Adams, <a href="https://www.global-counsel.co.uk/blog/brexit-customs-questions">Brexit customs questions</a> (Global Counsel, 16 August 2017)<br />
<br />
Ian Allison, <a href="http://www.ibtimes.co.uk/blockchain-plus-3d-printing-equals-smart-manufacturing-ethereum-you-can-touch-1585747">Blockchain plus 3D printing equals 'smart manufacturing' and Ethereum you can touch</a> (International Business Times, 11 October 2016)<br />
<br />
Aleya Begum, <a href="https://www.gtreview.com/news/europe/uk-outlines-fantasy-post-brexit-customs-position/">UK outlines "fantasy" post-Brexit customs position</a> (GTR Review, 16 August 2017)<br />
<br />
Charles Brett, <a href="https://www.enterprisetimes.co.uk/2017/12/11/might-blockchains-help-crack-brexits-irish-trade-problem/">Might blockchains help crack Brexit’s Irish trade problem?</a> (Enterprise Times, 11 December 2017)
<br />
<br />
British International Freight Association, <a href="http://www.bifa.org/news/articles/2017/feb/blockchain-technology-in-logistics">Blockchain Technology in Logistics</a> (BIFA, Feb 2017)<br />
<br />
John Campbell, <a href="http://www.bbc.co.uk/news/world-europe-42111690">Report suggests 'low friction' Brexit border solution</a> (BBC News, 25 November 2017)<br />
<br />
Rory Cellan-Jones, <a href="http://www.bbc.co.uk/news/technology-40939816">Can tech solve the Brexit border puzzle?</a> (BBC News, 16 August 2017)
<br />
<br />
Chris Grey, <a href="http://chrisgreybrexitblog.blogspot.co.uk/2017/12/why-brexiters-are-flummoxed-by-irish.html">Why Brexiters are flummoxed by the Irish Border</a> (2 December 2017)<br />
<br />
Lars Karlsson, <a href="http://www.europarl.europa.eu/RegData/etudes/STUD/2017/596828/IPOL_STU%282017%29596828_EN.pdf">Smart Border 2.0: Avoiding a hard border on the island of Ireland for Customs control and the free movement of persons</a> (European Parliament, November 2017)<br />
<br />
John Temple Lang, <a href="http://www.europarl.europa.eu/RegData/etudes/STUD/2017/596825/IPOL_STU%282017%29596825_EN.pdf">Brexit and Ireland Legal, Political and Economic Considerations</a> (European Parliament, November 2017)<br />
<br />
Matthew Lesh, <a href="http://brexitcentral.com/blockchain-innovative-solution-brexit-customs/">Blockchain offers an innovative solution to the Brexit customs puzzle</a> (Brexit Central, 17 August 2017)<br />
<br />
Natasha Lomas, <a href="https://techcrunch.com/2015/06/29/everledger/">Everledger Is Using Blockchain To Combat Fraud, Starting With Diamonds</a> (TechCrunch 29 Jun 2015)<br />
<br />
Paul McClean, <a href="https://www.ft.com/content/d9dd2a2a-c2ee-11e6-9bca-2b93a6856354">EU report backs joint effort to trace illicit cigarettes</a> (FT, 22 December 2016)<br />
<br />
Dan McCrum and Jemima Kelly, <a href="https://ftalphaville.ft.com/2018/10/02/1538481491000/Chancellor-s-blockchain-idea-is-a-desperate-scrape-of-the-Brexit-barrel">Chancellor's blockchain idea is a desperate scrape of the Brexit barrel</a> (Financial Times Alphaville, 2 October 2018) - article available after free registration<br />
<br />
Adele Peters, <a href="https://www.fastcompany.com/3063440/tracking-tuna-on-the-blockchain-to-prevent-slavery-and-overfishing">Tracking Tuna On The Blockchain To Prevent Slavery And Overfishing</a> (Fast Company, 8 Sept 2016) <br />
<br />
Jeff John Roberts, <a href="http://fortune.com/2017/09/21/pharma-blockchain/">Big Pharma Turns to Blockchain to Track Meds</a> (Fortune, 21 Sep 2017)<br />
<br />
Gian Volpicelli, <a href="http://www.wired.co.uk/article/blockchain-conflict-diamonds-everledger">How the blockchain is helping stop the spread of conflict diamonds</a> (Wired, 15 February 2017)<br />
<br />
Wikipedia: <a href="https://en.wikipedia.org/wiki/Kimberley_Process_Certification_Scheme">Kimberley Process Certification Scheme</a>, <a href="https://en.wikipedia.org/wiki/Rudy_Kurniawan">Rudy Kurniawan</a><br />
<br />
<br />
Related Posts:<br />
<br />
<a href="https://rvsoapbox.blogspot.co.uk/2016/11/steering-enterprise-of-brexit.html">Steering the Enterprise of Brexit</a> (November 2016)<br />
<a href="http://rvsoftware.blogspot.com/2017/09/blockchain-and-edge-of-disruption-fake.html">Blockchain and the Edge of Disruption - Fake News</a> (September 2017)<br />
<br />
<br />
<br />
<span style="font-size: xx-small;">Updated 4 October 2018</span><br />
<hr />
<b>Blockchain and the Edge of Disruption - Fake News</b> (1 September 2017)<br />
In a world where stability and trust are under threat, blockchain may seem to be a good way of holding the line. In the past month, @omribarzi has written several Forbes articles describing various applications of blockchain technology.<br />
<br />
In this post I want to look at his proposal for addressing the problem of fake news, in which he makes the following claims:<br />
<ul>
<li>Blockchain Tech Seeks to Decentralize News
</li>
<li>Blockchain Tech Can Fix Mainstream Media
</li>
<li>Blockchain Can Also fix Social Media
</li>
<li>Giving Control Back to the Users</li>
</ul>
So let's start with the problem statement. <br />
<blockquote class="tr_bq">
"The biggest issue with news sources in the digital age is verifiability.
... During the last American election, accusations of bias were everywhere, and the public has grown sick of the lack of clear and unbiased journalism."</blockquote>
One of the things that blockchain can do is provide a clear lineage for a given item. If someone presents you with a dodgy story and claims that it comes from a reputable source such as the BBC, you can (if you choose) inspect the blockchain to verify this claim.<br />
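What such lineage-checking would amount to in practice is roughly this. The sketch below is hypothetical - a local dict stands in for a blockchain lookup, and a real scheme would use publisher signatures on a shared ledger - but it shows both the mechanism and its limitation.

```python
import hashlib

# Hypothetical registry mapping article hashes to the source that
# committed them to the ledger (stand-in for a blockchain lookup).
registry = {}

def publish(source: str, article_text: str) -> str:
    """Record the hash of an article against its source."""
    digest = hashlib.sha256(article_text.encode()).hexdigest()
    registry[digest] = source
    return digest

def verify_claimed_source(article_text: str, claimed_source: str) -> bool:
    """Does the ledger confirm this exact text came from the claimed source?
    Limitation: a paraphrase or edited snippet hashes differently, so this
    verifies provenance of the bytes, not the truth of the content."""
    digest = hashlib.sha256(article_text.encode()).hexdigest()
    return registry.get(digest) == claimed_source

publish("BBC", "Storm causes flooding in Somerset.")
assert verify_claimed_source("Storm causes flooding in Somerset.", "BBC")
# Change even one character and verification fails - which is also
# why this approach says nothing about summaries or snippets.
assert not verify_claimed_source("Storm causes flooding in Somerset!", "BBC")
```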
<br />
Or you could just look on the BBC website. It is not clear that blockchain is any easier or more reliable than other methods of fact-checking.<br />
<br />
One use case described in the article is that "writers can offer
snippets -- concise summaries of news articles". Blockchain may be able
to verify that an original news article has a reputable source, but how
can Blockchain verify that the summary accurately represents the
original news article or articles? Fake news can sometimes contain true
snippets taken out of context, and juxtaposed with other material to
create a deliberately false impression.<br />
<br />
A lot of recent fake news has been exposed by simple fact check. For
example, the false assertion that President Obama played golf during
Hurricane Katrina is refuted by a simple date check (Obama was not
president during Katrina) or by looking at contemporary news reports. Is there a way that Blockchain could
establish a link from the snippet to the fact-check?<br />
<br />
And the quality of news is not just dependent on identifying the source. The BBC is a reliable source of news for many topics, but in some areas (e.g. climate change, Brexit economics) a dogmatic notion of "balance" results in its giving the same respect to dubious minority opinions as to expert consensus. <br />
<br />
Verifiability is ultimately a question of methodology. Where a news
story is controversial or politically charged, a good journalist or
editor strongly prefers multiple independent sources, and will actively
check the most obvious ways in which the story might be refuted (such as
reverse image search). How is Blockchain going to help here?<br />
<br />
Most of the time, fact-checking is relatively easy if you can be bothered. The reason fake news flourishes is that people can't be bothered. Often they can't even be bothered to read the article or view the video before reposting something, so the "like" is based purely on a seductive headline.<br />
<br />
The article describes a platform called Snip, which will establish a reputation economy, and somehow remain immune to armies of bots. Snip means you never have to read long-form journalism (if you don't want to) and it has a machine learning algorithm "that learns you and your
preferences so the end result is highly relevant personalize genuine
news feed". Sounds pretty much like Facebook. Is that really "giving control back to the users"?<br />
<br />
<hr />
<br />
Omri Barzilay, <a href="https://www.forbes.com/sites/omribarzilay/2017/08/14/why-blockchain-is-the-future-of-the-sharing-economy">Why Blockchain Is The Future Of The Sharing Economy</a> (Forbes, 14 August 2017)<br />
Omri Barzilay, <a href="https://www.forbes.com/sites/omribarzilay/2017/08/21/3-ways-blockchain-is-revolutionizing-cybersecurity">3 Ways Blockchain Is Revolutionizing Cybersecurity</a> (Forbes, 21 August 2017)
<br />
Omri Barzilay, <a href="https://www.forbes.com/sites/omribarzilay/2017/08/28/how-blockchain-is-reinventing-your-news-feed">How Blockchain Is Reinventing Your News Feed</a> (Forbes, 28 August 2017)<br />
Omri Barzilay, <a href="https://www.forbes.com/sites/omribarzilay/2017/08/30/will-blockchain-change-the-way-we-invest">Will Blockchain Change The Way We Invest?</a> (Forbes, 30 August 2017)<br />
<br />
Alexandra Svokos, <a href="http://elitedaily.com/news/politics/barack-obama-actually-visited-hurricane-katrina-victims-haters-get/2059544/">Barack Obama Actually Visited Hurricane Katrina Victims, So Haters Get Out</a> (Elite Daily, 31 August 2017)<br />
<br />
Related Post: <a href="https://rvsoftware.blogspot.co.uk/2017/09/blockchain-and-edge-of-disruption-brexit.html">Blockchain and the Edge of Disruption - Brexit</a> (September 2017)
<hr />
<b>The PowerPoint Collection</b> (1 March 2017)<br />
A collection of blogposts about PowerPoint.<br />
<hr />
<br />
<a href="https://demandingchange.blogspot.co.uk/2005/02/corrupting-evidence.html">Corrupting Evidence</a> (Feb 2005)
<br />
<blockquote class="tr_bq">
<a href="http://www.edwardtufte.com/">Edward Tufte</a> is writing a book called Beautiful Evidence, about the proper and improper use of modern rhetorical media, such as PowerPoint.
</blockquote>
<br />
<a href="https://posiwid.blogspot.co.uk/2006/03/powerpoint.html">What Exactly is PowerPoint For?</a> (March 2006) <br />
<blockquote class="tr_bq">
Microsoft is making concerted efforts to improve their own use of PowerPoint, and to encourage others to use it better. Bill Gates spoke without slides in his keynote speech at <a href="https://rvsoftware.blogspot.co.uk/2006/03/mix06-keynote.html">Mix06</a>. </blockquote>
<br />
<a href="https://rvsoftware.blogspot.co.uk/2006/05/beyond-bullet-points.html">Beyond Bullet Points</a> (May 2006) <br />
<blockquote class="tr_bq">
Some of my friends at Microsoft are excited about Cliff Atkinson and his "new"
presentation style, based on the work of psychology professor Richard E.
Mayer.</blockquote>
<br />
<a href="https://rvsoftware.blogspot.co.uk/2006/05/whos-dick-in-winebar.html">Who's the Dick in the Wine Bar</a> (May 2006)<br />
<blockquote class="tr_bq">
If you are accustomed to traditional PowerPoint, beware. You may find these videos disturbing. </blockquote>
<br />
<a href="https://rvsoftware.blogspot.co.uk/2006/11/powerpoint-slides.html">PowerPoint Slides</a> (Nov 2006)<br />
<blockquote class="tr_bq">
It is not Microsoft's fault if the Pentagon makes inappropriate use of
the available tools. Loads of stupid documents have been written in
Word, and loads of bad accounts produced in Excel. But it is PowerPoint
gets most of the criticism.</blockquote>
<br />
<a href="https://rvsoftware.blogspot.co.uk/2009/10/blame-powerpoint.html">Blame PowerPoint</a> (Oct 2009)<br />
<blockquote class="tr_bq">
If different groups or communities use PowerPoint differently, there may
be many different PowerPoints-in-use corresponding to a single
PowerPoint-as-built.</blockquote>
<br />
<a href="https://demandingchange.blogspot.co.uk/2010/04/visualizing-complexity.html">Visualizing Complexity</a> (April 2010)<br />
<blockquote class="tr_bq">
Lot of people have been mocking a diagram that attempts to visualize the
complexity of the situation in Afghanistan using system dynamics,
rendered as a PowerPoint slide. (Many people have chosen to blame
PowerPoint for the complexity of this diagram.) See also <a href="https://demandingchange.blogspot.co.uk/2010/07/understanding-complexity.html">Understanding Complexity</a> (July 2010)</blockquote>
<br />
<a href="https://rvsoapbox.blogspot.co.uk/2010/11/visual-cliche-in-architectural.html">Visual Cliché in Architectural Discourse</a> (Nov 2010) <br />
<blockquote class="tr_bq">
The visual language of architectural discourse, from enterprise to
software, is surprisingly weak. Many diagrams look as if they may have
started as meaningful sentences, but they have been transformed into
diagrams by discarding most of the words and putting the remaining words
into coloured shapes, arranged artistically on the slide.</blockquote>
<br />