Showing posts with label trust. Show all posts

Tuesday, January 09, 2018

Blockchain and the Edge of Disruption - Kodak

Shares in Eastman Kodak more than doubled today following the announcement of the Kodakcoin, "a photocentric cryptocurrency to empower photographers and agencies to take greater control in image rights management".

As Andrew Hill points out, blockchain enthusiasts have often mentioned rights management as one of the more promising applications of digital ledger technology. @willms_ listed half a dozen initiatives back in August 2016, and blockchain investor @alextapscott had a piece about it in the Harvard Business Review last year.

In recent years, Kodak has been held up (probably unfairly) as an example of a company that didn't understand digital. Perhaps to rub this message home, today's story in the Verge is illustrated with stock footage of analogue film. But the bounce in the share price indicates that many investors are willing to give Kodak another chance to prove its digital mettle.

However, some commentators are cynical.



The point of blockchain is to support distributed trust. But the rights management service provided by Kodak doesn't rely on distributed trust, it relies entirely on Kodak. If you trust Kodak, you don't need the blockchain to validate a Kodak-operated service; and if you don't trust Kodak, you probably won't be using the service anyway. So what's the point of blockchain in this example?




Chloe Cornish, Kodak pivot to blockchain sends shares flying (FT, 9 January 2018)

Chris Foxx and Leo Kelion, CES 2018: Kodak soars on KodakCoin and Bitcoin mining plans (BBC News, 9 January 2018)

David Gerard, Kodak’s ICO for a stock photo site that doesn’t exist yet. But the stock price! (10 January 2018)

Jeremy Herron, Kodak Surges After Announcing Plans to Launch Cryptocurrency Called 'Kodakcoin' (Bloomberg, 9 January 2018)

Andrew Hill, Kodak’s convenient click into the blockchain (FT, 9 January 2018)

Shannon Liao, Kodak announces its own cryptocurrency and watches stock price skyrocket (The Verge, 9 January 2018)

Willy Shih, The Real Lessons From Kodak’s Decline (Sloan Management Review, Summer 2016)

Don Tapscott and Alex Tapscott, Blockchain Could Help Artists Profit More from Their Creative Works (HBR, 22 March 2017)

Jessie Willms, Is Blockchain-Powered Copyright Protection Possible? (Bitcoin Magazine, 9 August 2016)



Related posts

Blockchain and the Edge of Disruption - Brexit (September 2017)
Blockchain and the Edge of Disruption - Fake News (September 2017)

Sunday, November 08, 2015

How Soon Might Humans Be Replaced At Work?

#CIPAai An interesting debate on Artificial Intelligence took place at the Science Museum this week, sponsored by the Chartered Institute of Patent Agents. When will humans be replaced by computers in any given job?

As this was the professional body for patent agents, they decided to pick an example close to their hearts. The specific motion being debated was that a patent would be filed and granted without human intervention within the next 25 years. The motion was passed by roughly 80 votes to 60.

At first sight, this debate appeared to be an exercise in technological forecasting. When would AI be capable of creating new inventions and correctly drafting the patent application? And when would AI be capable of evaluating a patent application, carrying out the necessary searches, and granting a patent? Is this the kind of thing we should expect when the much-vaunted Singularity (predicted from around 2040 onwards) occurs?

Speaking for the motion, Calum Chace and Chrissie Lightfoot were enthusiastic about the technological opportunities of AI. They pointed out the incredible feats that had already been achieved as a result of machine learning, including some surprisingly creative solutions to technical problems.

Speaking against the motion, Nigel Hanley and Ilya Kazi acknowledged the great contribution of computer intelligence to support the patent agent and patent examiner, but were sceptical that anyone would trust a computer with such an important task as filing and granting patents. Nigel Hanley pointed out the limitations of internet search, which is of course designed to find things that other people have already found. (As A.A. Milne put it, Thinking With The Majority.)

The motion only required that a single patent be filed and granted without human intervention. It didn't need to be a particularly complicated one. But even to grant a single patent without human intervention would require a change in the law, presumably agreed internationally. (As it happens, my late father Kenneth Veryard was involved in the development of European Patent Law around 25 years ago, so I am aware of the time and painstaking effort required to achieve such international agreements.)

But this reframes the debate: from a technological one about the future capability of computers, to a sociopolitical one about the possibility of institutional change. Even if some algorithm were good enough to compete with humans, at least for some routine patent matters, the question is whether politicians would be willing to entrust these matters to an algorithm.

There are also strange questions of ownership and rights. Examples of computer intelligence always seem to come back to the usual suspects - Google, IBM Watson, and their ilk. If the creativity comes from the large computer networks run by these companies, then the patents will belong to these corporations. When Thomas Watson said, "I think there is a world market for maybe five computers", he wasn't talking about billions of laptops or trillions of internet-enabled things, but the very much smaller number of major computer networks capable of controlling everything else.

Can we realistically expect AI to take over one small area of patent law without taking over the much larger challenge of cleaning up legislation? After all, a genuine superintelligence might well come up with a much better basis for promoting innovation and protecting the interests of inventors than a few ancient principles of patent law.

But perhaps here's the killer argument. As the volume of patent applications increases, the cost of processing them all by hand becomes prohibitive. So governments could be tempted by the cost-savings offered by a clever algorithm. Even though governments have a very bad track record at realising cost savings from IT projects, politicians can often be persuaded to think it will be different this time.

So even if AI patent activity turns out not to be as good as when humans do it, and even if it subsequently results in a lot of seriously expensive litigation, it could seem a lot cheaper in the short-term.


References


http://www.cipadebate.org.uk/

Steven Johnson, Superintelligence Now (How We Get To Next, 28 October 2015)

James Nurton, Could a computer do your job? (Managing IP, 3 November 2015)

Wikipedia: Technological Singularity


Related Posts

The End of Google (June 2006), What does a patent say? (February 2023)


Update 2016

For the potential ramifications of robotic legal assistants, see Remus, Dana and Levy, Frank S., Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law (December 30, 2015). Available at SSRN: http://ssrn.com/abstract=2701092 or http://dx.doi.org/10.2139/ssrn.2701092. Reported by Aviva Rutkin, Artificial intelligence could make lawyers more risk averse (New Scientist 27 January 2016).

See also Ryan Abbott, I Think, Therefore I Invent: Creative Computers and the Future of Patent Law (Boston College Law Review, Vol 57 Issue 4, September 2016). Reported in Iain Thompson, AI software should be able to register its own patents, law prof argues (The Register, 17 October 2016)

Update 2021

Tom Knowles, Patently brilliant ... AI listed as inventor for first time (The Times, 28 July 2021)

Dagmar Monett tweeted Can an #AI invent something? No, it can't. 

David Gunkel replied I understand the issue here, but the question before the court in "Thaler v Commissioner of Patents [2021] FCA 879" was not "Can an #AI invent something?" The question decided by the court was "Can an #AI (DABUS) be named "inventor" on a patent application?" Different questions.

Update 2023

Further news on the DABUS case
AI cannot patent inventions, UK Supreme Court confirms (BBC News 21 December 2023) 

 

updated 18 October 2021, link added 21 Feb 2023, updated 22 Dec 2023

Thursday, November 27, 2014

Misunderstanding CRM and Big Data

Listening to @peter_w_ryan, @markhillary and Alexey Minkevich talking about #CRM and #BigData at the Institute of Directors, sponsored by IBA Group.

Peter cites an Ovum survey showing that Customer Satisfaction is now the number one concern of management, and argues for what Ovum calls Intelligent CRM. (CA announced something under this label back in October 2000. Other products are available.)

Mark says that CRM and Big Data are widely misunderstood, which is certainly true. In my view, the first misunderstanding is to think CRM is about managing THE relationship with THE customer, and I completely agree with Clayton Christensen (via Sloan) that this isn't enough. What we really need to focus on is the job customers are trying to get done when they use your product or service.

Who is good at CRM? Peter cites an example of a professor of marketing who got a personalized service at a certain chain of hotels and has been talking about it ever since. (That's a pretty good coup for the hotel, if we take the story at face value.) Mark cites the video game market, where both the console manufacturers and the large game publishers are able to collect and analyse huge quantities of consumer behaviour.

Is CRM with Big Data merely a new way of taking advantage of customers? Although most people seem oblivious to the privacy and trust risks, the Wall Street Journal this week suggested that the consumer is becoming more savvy and less susceptible to exploitative loyalty schemes and promotions. This might help to explain why Tesco, once a master of the science of retail, now seems to be faltering.

If there is a sustainable business model based on CRM and Big Data, it must surely involve using these technologies to engage intelligently, authentically and ethically with customers, rather than imagining that these technologies can provide a quick fix for stupid organizations to take advantage of compliant customers.



Related Blogs

Customer Orientation (May 2009)

The Science of Retail (April 2012)

Other Articles

Martha Mangelsdorf, Understanding your customer isn't enough (Sloan Review May 2009)

Shelly Banjo and Sara Germano, The End of the Impulse Shopper (Wall Street Journal 25 November 2014)

Intelligent CRM

AI-CRM "An intelligent CRM system with atuo-learning-tunning engine (sic), Aichain offers the most widely used open source business intelligence software in the world." Last updated March 2013

CA rolling out customer relationship management software (ComputerWorld October 2000)

IBA Group "maintains its focus on IT outsourcing that has become a strategy for many organizations seeking to improve their business processes"

Friday, March 04, 2011

IT analysis and trust

@mkrigsman asks "Trust is the currency that matters most. How many analysts / bloggers deserve it?"

@markhillary replies "surely in the same way as a journalist is trusted, by earning it"

@mkrigsman is particularly concerned about those who write about IT failure. (I'm not sure why he singles out that topic, but I note that the concern arose during a conversation with @benioff, boss of Salesforce.) "When someone writes on IT failures ask "What's their angle?". Usually sensationalism, currying favor, or threatening a vendor." When challenged about his own angle by @njames, @mkrigsman replies "I want to expose *why* projects fail, so we understand magnitude of the problem and can improve."

Trust is clearly a difficult issue for software industry analysts. Unfortunately, Michael's answer to Nigel's challenge cannot prove that he doesn't have a hidden agenda, because the untrustworthy are often just as able as the trustworthy to produce a plausible cover story. If we trust Michael it's not because he can answer the challenge but because of his track record.

We also need to ask: trusted by whom? Software companies might prefer industry analysts to be compliant and predictable, but intelligent software users might regard such analysts as being insufficiently independent. Who would you trust to tell you about Microsoft's new platform - someone who is always pro-Microsoft, someone who is always anti-Microsoft, or someone who has a track record of making both positive and negative comments about Microsoft and its competitors?

Of course, this comment doesn't only apply to industry analysts. Robert Scoble, when he worked for Microsoft, made a point of distancing himself from the party line, and he therefore commanded a different kind of attention and respect than did Bill Gates or Steve Ballmer.

From a simplistic software industry perspective, an analyst who talks about IT success might be regarded as a friend, whereas an analyst who talks about IT failure is potentially an enemy. (This might explain Marc Benioff's wish to challenge the hidden agenda of the latter.) Many software and service companies adopt a from-failure-to-success rhetoric - "the best way to avoid the risk of failure is to buy our software and hire our consultants" - but even this framing dwells on failure, which is not ideal from a sales perspective.

Mark Hillary appeals to a journalistic ethic, which would presumably include things like balance and transparency. But balance is not always appreciated by those with most at stake. In the past, I have written technology reports on new products, which I regarded as generally positive with a few small caveats. (I don't generally waste my time writing about products that are no good.) But the vendors concerned have often regarded my remarks as highly critical. (Fortunately, this over-sensitivity on the part of software companies is now changing, thanks in part to social media, and companies now understand that a robust debate can be just as beneficial as a highly controlled one-way marketing exercise.)

From a narrow software industry perspective, a trustworthy industry analyst is one who satisfies Simon Cameron's definition of an honest politician - "one who, when he is bought, will stay bought". But from a broader perspective, we should surely prefer to trust those industry analysts with independently critical minds, unafraid to ask awkward questions and publish the answers.

With the large industry analysis firms, the question of trust shifts from personal integrity to corporate integrity. The sales pitch for these firms depends not just on isolated flashes of insight from individual analysts, but on the collaborative intelligence of a community of analysts.

Corporate integrity depends not just on transparency about the relationship between the work paid for by software vendors and the independent research consumed by CIOs, but also on a coherent and robust research methodology adopted consistently across the firm, typically supported by an apparatus of surveys and structured questionnaires and checklists and spreadsheets.

However, there is a potential disconnect between the routine processing of supposedly objective raw data (this product with this market share in this geography in this time period) and the generation of useful interpretation and opinion, which is where the analytical magic and subjectivity comes in. One example of this magic, Gartner's Magic Quadrant, has been challenged in the courts; Gartner's defence has been that MQ represented opinion rather than fact. (See my post The Magic Sorting Hat is Innocent, Okay?) And the complicated relationship between fact and opinion, and the transparency of reasoning and evidence, is surely relevant to the level of trust that can be invested by different stakeholders in such analyses.

By the way, why am I writing about software industry analysis? Obviously, because I want to expose *why* analysis fails, so we understand magnitude of the problem and can improve. How can software industry analysis deliver greater levels of intelligence and value to the software industry as a whole?

Wednesday, December 18, 2002

Autonomous Computing - Fiefdoms and Fortresses

Pat Helland of Microsoft has proposed the Autonomous Computing model as an application design pattern for cooperation between independent systems that do not trust each other. It has two key notions.

Fiefdom An independent computing environment that refuses to trust any outsiders and maintains tight control over a set of mission critical data 

Emissary A computing component that helps prepare requests to submit to a fiefdom. It operates exclusively on published (snapshot) reference data and single-user data. 

Helland uses the autonomous computing model to explain many of the new types of applications including offline apps, scalable web-farms, B2B apps, content syndication and content aggregation. (How secure are these, then?)
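The fiefdom/emissary split can be sketched in a few lines of Python. This is only an illustration of the pattern as described above, not code from Helland's paper: the class names, SKUs and prices are invented, and the key point is that the emissary works only from published snapshot data, while the fiefdom re-validates every request against its own authoritative data before touching its mission-critical store.

```python
from dataclasses import dataclass

# Published snapshot of reference data: the only information an emissary
# may rely on. It may be stale, so the fiefdom re-validates everything.
PRICE_SNAPSHOT = {"sku-1": 10, "sku-2": 25}

@dataclass
class Order:
    sku: str
    quantity: int
    quoted_price: int  # the price the emissary saw in its snapshot

class Emissary:
    """Prepares requests using only snapshot reference data."""
    def __init__(self, snapshot):
        self.snapshot = snapshot

    def prepare_order(self, sku, quantity):
        return Order(sku, quantity, self.snapshot[sku])

class Fiefdom:
    """Guards mission-critical data and trusts no incoming request."""
    def __init__(self):
        self.current_prices = {"sku-1": 12, "sku-2": 25}  # authoritative
        self.ledger = []  # the mission-critical data behind the wall

    def submit(self, order):
        # Re-check everything against authoritative data; the emissary's
        # snapshot may be out of date, or the request may be malformed.
        actual = self.current_prices.get(order.sku)
        if actual is None or order.quantity <= 0:
            return "rejected"
        if order.quoted_price != actual:
            return "requote"  # stale snapshot: the emissary must retry
        self.ledger.append(order)
        return "accepted"

emissary = Emissary(PRICE_SNAPSHOT)
fiefdom = Fiefdom()
print(fiefdom.submit(emissary.prepare_order("sku-1", 3)))  # stale price
print(fiefdom.submit(emissary.prepare_order("sku-2", 1)))  # price matches
```

Note that the fiefdom never shares its live data with the emissary; it only publishes snapshots, which is exactly why it must treat every submitted request as potentially out of date.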

Roger Sessions of Object Watch has combined the Helland model with other elements to produce an elaborate Fortress Model of computer security. A fortress is a self-contained software system: it contains business logic (grunts) and private data (strongboxes), and is surrounded by an unbreachable wall. Communication with the outside world passes through a drawbridge, and is controlled by guards and by treaties with allies.

I have many reservations about these models. Here are three to be going on with. 

  • Reliance on an absolute, binary notion of trust. Anything or anybody inside the wall is trusted absolutely, anything or anybody outside the wall is mistrusted. 
  • Reliance on simple topology. A wall creates a simple enclosed space, a straightforward boundary between inside and outside. 
  • Reliance on technology. The fortress model depends on firewalls and other security mechanisms. 

 


Pat Helland, Autonomous Computing paper and blogpost (updated December 2020)

Roger Sessions, The Software Fortress Model: A Next Generation Model for Describing Enterprise Software Architectures (Object Watch Newsletter 36, 17 November 2001)

Richard Veryard and Aidan Ward, Web Service Security (CBDI Journal January 2002)


Originally published at http://www.veryard.com/sebpc/security.htm#autonomous

Links updated March 2022 to include Pat Helland's new version