Thursday, January 17, 2013

Business Signal Optimization

@DouglasMerrill of @ZestFinance (via @dhinchcliffe) tells us A Practical Approach to Reading Signals in Data (HBR Blogs November 2012)

If we think of data in tabular form, there are two obvious ways of increasing the size of the table: increasing the number of rows (a greater volume of cases) or increasing the number of columns (a greater volume of signals). Adding columns can mean either a greater variety of variables, as Merrill advocates, or a higher frequency of the same variables. I have talked in the past about the impact of increased granularity on Big Data.
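To make the distinction concrete, here is a minimal sketch in Python (pandas, with entirely hypothetical data and column names) of the three directions of growth: more cases, more signals, and a higher frequency of the same signal.

```python
# Hypothetical example: three ways a tabular dataset can grow.
import pandas as pd

# Base table: one row per case, a handful of signals.
cases = pd.DataFrame({
    "applicant_id": [1, 2, 3],
    "income": [32000, 54000, 41000],
    "existing_debt": [5000, 12000, 800],
})

# 1. More rows: a greater volume of cases.
more_cases = pd.concat(
    [cases, pd.DataFrame({"applicant_id": [4], "income": [27000],
                          "existing_debt": [3100]})],
    ignore_index=True,
)

# 2. More columns: a greater variety of variables (Merrill's approach).
more_signals = cases.assign(
    years_at_address=[7, 2, 11],
    utility_payment_score=[0.4, 0.9, 0.6],  # hypothetical extra signal
)

# 3. More columns again, but by higher frequency of the same variable:
#    e.g. monthly income readings instead of a single annual figure.
finer_grain = cases.assign(**{
    f"income_month_{m}": cases["income"] / 12 for m in (1, 2, 3)
})
```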

As I understand it, Merrill's company sells Big Data solutions for credit underwriting, and its algorithms use thousands of different indicators to calculate risk.

The first question I always have about such sophisticated decision-support technologies is what the feedback and monitoring loop looks like. If the decision is fully automated, then it would be good to have some mechanism for monitoring the accuracy of the algorithm's predictions. The difficulty is that there is usually no experimental control: we only observe outcomes for the cases the algorithm accepts, so there is no direct way of learning whether the algorithm is being over-cautious. I call this one-sided learning.
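A minimal simulation may help to show the problem. In this sketch (all numbers invented), outcomes are only observed for approved cases, so the organization can measure the default rate among the cases it accepted but has no way of measuring the good customers it turned away.

```python
# One-sided learning: the feedback loop only covers approved cases.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
risk_score = rng.uniform(0, 1, n)                        # algorithm's predicted risk
truly_defaults = rng.uniform(0, 1, n) < 0.5 * risk_score  # true outcomes

approved = risk_score < 0.4                              # automated accept rule

# What the monitoring loop can see: outcomes of approved cases only.
print(f"default rate among approved: {truly_defaults[approved].mean():.1%}")

# What it cannot see in production: declined cases that would have been
# good customers. Only an experimental control (e.g. approving a random
# sample of declines) would reveal this.
print(f"good customers among declined (unobservable): "
      f"{(~truly_defaults[~approved]).mean():.1%}")
```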

Where the decision involves some human intervention, there are further things to think about in evaluating the effectiveness of the decision-support. What are the statistical patterns of human intervention, and how do they relate to the way the decision-support software presents its recommendations?
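One simple starting point, sketched below with made-up data, is to log each recommendation alongside the final human decision and cross-tabulate the two, giving an override rate for each recommendation band.

```python
# Hypothetical decision log: software recommendation vs final decision.
import pandas as pd

log = pd.DataFrame({
    "recommendation": ["accept", "accept", "decline", "decline", "refer"],
    "final_decision": ["accept", "decline", "decline", "accept", "accept"],
})

# Override rate per recommendation band: how often the human departs
# from what the software suggested.
override_rate = (
    log.assign(overridden=log["recommendation"] != log["final_decision"])
       .groupby("recommendation")["overridden"]
       .mean()
)
print(override_rate)
```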

Suppose that statistical analysis shows that the humans are basing their decisions on a much smaller subset of indicators, and that much of the data being presented to the human decision-makers is being systematically ignored. This could mean either that the software is too complicated (over-engineered) or that the humans are too simple-minded (under-trained). I have asked many CIOs whether they carry out this kind of statistical analysis, but most seem to think their responsibility for information management ends once they have provided users with the requested information or service; how that information or service is then used is not their problem.
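One way to carry out this kind of analysis, sketched below on simulated data, is to fit a sparse model that predicts the human decision from the indicators the software presented, and count how many indicators actually carry weight. If only a handful do, most of the presented data is being ignored.

```python
# Do the humans actually use all the signals they are shown?
# Simulated data: decision-makers who attend to only three of fifty signals.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_cases, n_signals = 5_000, 50
X = rng.normal(size=(n_cases, n_signals))   # signals shown to the user
human_decision = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)

# L1 regularisation drives the weights of unused signals to zero.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, human_decision)

used = np.flatnonzero(model.coef_[0])
print(f"signals with non-zero weight: {len(used)} of {n_signals}")
```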

Meanwhile, the users may well have alternative sources of information, such as social media. One of the challenges Dion Hinchcliffe raises is how these richer sources of information can be integrated with the tabular data on which the traditional decision-support tools are based. I think this is what Dion means by "closing the clue gap".
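One plausible integration path (a sketch only, with hypothetical names and an assumed upstream scoring step) is to reduce the unstructured social stream to per-case features and join them onto the existing table.

```python
# Joining social-media-derived features onto tabular case data.
import pandas as pd

cases = pd.DataFrame({"applicant_id": [1, 2], "income": [32000, 54000]})

# Assume an upstream step has already scored each post for sentiment.
posts = pd.DataFrame({
    "applicant_id": [1, 1, 2],
    "sentiment": [0.2, 0.8, -0.3],
})

# Aggregate the stream down to one row per case, then merge.
social_features = (
    posts.groupby("applicant_id")["sentiment"]
         .agg(["mean", "count"])
         .add_prefix("sentiment_")
         .reset_index()
)
enriched = cases.merge(social_features, on="applicant_id", how="left")
print(enriched)
```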

Dion Hinchcliffe, The enterprise opportunity of Big Data: Closing the "clue gap" (ZDNet August 2011)

Dion Hinchcliffe, How social data is changing the way we do business (ZDNet Nov 2012)

Douglas Merrill, A Practical Approach to Reading Signals in Data (HBR Blogs November 2012)

Places are still available on my forthcoming workshops Business Awareness (Jan 28), Business Architecture (Jan 29-31), Organizational Intelligence (Feb 1).

