
Saturday, April 26, 2014

Does Big Data Release Information Energy?

@michael_saylor of #MicroStrategy says that the Information Revolution is about harnessing "information energy" (The Mobile Wave, p 221). He describes information as a kind of fuel that generates "decision motion", driving people - and machines - to make a decision and take a course of action.

We already know that putting twice as much fuel into a vehicle doesn't make it twice as fast or twice as reliable. (Indeed, aeroplanes sometimes dump fuel to enable a safer landing.) But Saylor explains that information energy is not the same as physical energy.

1. Information energy doesn't follow conservation laws. Information can be created and consumed repeatedly, but never depleted or destroyed. (Unless it is lost or forgotten.)

2. Whereas physical energy is additive, the energy content of information is exponential.

3. The value of information depends on its use, and who is using it.


Let's look at his example.

"Total wheat production for a single year is valuable information; but total wheat production for ten years, combined ten years of rainfall data and ten years of fertilizer represents thirty times more data droplets, but probably contains one hundred times more information energy, because it shows trends and correlations that will drive a greater number of decisions." (pp 221-2).

In other words, thirty times as much data produces a hundred times more information. Note that Saylor doesn't say this extra information MAY drive more decisions; he says it WILL drive more decisions. On this account, the Information Revolution (and our increasing reliance on tools such as MicroStrategy's products) is a historical inevitability.
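One way to make sense of the exponential claim is combinatorial: the value comes not from the data points themselves but from the relationships between them, and the number of possible relationships grows much faster than the number of points. Here is a toy sketch in Python - the series names and the ten-year window come from Saylor's example, but the counting scheme is my own illustration, not his:

```python
from itertools import combinations

# Saylor's example: three annual series over ten years = 30 "data droplets".
series = ["wheat_production", "rainfall", "fertilizer"]
years = 10
droplets = len(series) * years  # 30

# The relationships are what supposedly drive decisions: every pair of
# series can be correlated, and every pair of years within a series is a
# potential trend step.
cross_series = len(list(combinations(series, 2)))                     # 3
trend_steps = len(series) * len(list(combinations(range(years), 2)))  # 135

print(droplets, cross_series + trend_steps)  # 30 droplets, 138 relationships
```

On this kind of counting, thirty droplets do indeed yield over a hundred candidate relationships. Whether every one of those relationships actually drives a decision is, of course, another matter.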

But is it really true that more data produces more information in this exponential way? In practice, there is a depreciation effect for historical or remote data, because an accumulation of small changes in working practices and technologies can make direct comparison misleading or impossible. So even if the farmer had twenty years' worth of data, or shared data from thousands of other farmers, it would not necessarily help her to make better decisions. Five years' data might be almost as good as ten years'.
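A crude way to model this depreciation is to discount each year's data by some factor relative to the year before, reflecting the drift in working practices and technologies. The discount factor below is an arbitrary assumption for illustration, but the shape of the result holds for any factor less than one: the cumulative value of the archive flattens out quickly.

```python
# Minimal sketch: each year's data is worth a fraction d of the year
# before it, because working practices and technologies have drifted.
d = 0.7  # hypothetical annual depreciation factor

for archive_years in (5, 10, 20):
    useful = sum(d ** age for age in range(archive_years))
    print(f"{archive_years:2d} years of data ~ {useful:.2f} years' worth of usable information")
```

With this (made-up) discount factor, twenty years of records carry barely more usable information than five. The exponential claim quietly assumes that old data stays comparable with new data, and on a working farm it usually doesn't.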

Data is moving faster than ever before; we're also storing and processing more and more of it. But that doesn't mean we're just hoarding data. As Duncan Ross, director of data sciences at Teradata, puts it: "The pace of change of markets generally is so rapid that it doesn't make sense to retain information for more than a few years." (Charles Arthur, Tech giants may be huge, but nothing matches big data, Guardian 23 August 2013)

According to Saylor, the key to releasing information energy is mobile technology.

"The shocking thing about information is not how much there is, but how inaccessible it is despite the immense value it represents. ... Mobile computing puts information energy in hands of individuals during all waking hours and everywhere they are." (p 224)

What kind of decisions does Saylor imagine the farmer needs to make while sitting on a tractor or milking the cows? Obviously it would be useful to get an early warning of some emerging problem - for example, an outbreak of disease further down the valley, or possible contamination of a batch of feed or fertilizer at the factory. But complex information needs interpretation, and most decisions require serious reflection, not instant reaction.

So it is not clear that providing instant access to large quantities of information is going to improve the quality of decision-making. And giving people twice as much information often leads to further procrastination. Surely the challenge for MicroStrategy is to help people deal with information overload, not just add to it?

Furthermore, as I said in my post Tablets and Hyperactivity (Feb 2013), being "always on" means that you never have long enough to think through something difficult before you are interrupted by another event. There is always another email to attend to, there is always something happening on Twitter or Facebook, and mobile devices encourage and reinforce this kind of hyperactivity.

Saylor concludes that "the acid of technology etches away the unnecessary" (p 237). If only this were true.


Related posts

Service-Oriented Business Intelligence (September 2005)
On The True Nature of Knowledge (April 2014)


Updated 19 June 2014

Thursday, January 17, 2013

Business Signal Optimization

@DouglasMerrill of @ZestFinance (via @dhinchcliffe) tells us A Practical Approach to Reading Signals in Data (HBR Blogs November 2012)

If we think of data in tabular form, there are two obvious ways of increasing the size of the table: increasing the number of rows (a greater volume of cases) or increasing the number of columns (a greater volume of signals). The latter can involve either a greater variety of variables, as Merrill advocates, or a higher frequency of the same variable. I have talked in the past about the impact of increased granularity on Big Data.
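To make the two growth directions concrete, here is a minimal sketch using pandas - the column names and values are invented for illustration:

```python
import pandas as pd

# A toy table: each row is a case, each column a signal.
table = pd.DataFrame({"income": [40, 55], "age": [31, 48]})

# Growing the volume of cases: append more rows.
table = pd.concat(
    [table, pd.DataFrame({"income": [62], "age": [27]})],
    ignore_index=True,
)

# Growing the volume of signals: add a new variable (a new column), or a
# higher-frequency reading of an existing one (finer granularity).
table["income_monthly_variance"] = [3.1, 2.4, 5.0]

print(table)
```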

As I understand it, Merrill's company sells Big Data solutions for credit underwriting, and its algorithms use thousands of different indicators to calculate risk.

The first question I always have in regard to such sophisticated decision-support technologies is what the feedback and monitoring loop looks like. If the decision is fully automated, then it would be good to have some mechanism to monitor the accuracy of the algorithm's predictions. The difficulty here is that there is usually no experimental control, so there is no direct way of learning whether the algorithm is being over-cautious. I call this one-sided learning.
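Here is a minimal sketch of the problem, with invented scores and an invented approval threshold - nothing below is based on ZestFinance's actual algorithms:

```python
import random

random.seed(0)

# 1000 hypothetical applicants, each with a risk score from the algorithm.
applicants = [{"score": random.random()} for _ in range(1000)]
THRESHOLD = 0.6  # hypothetical approval cutoff

approved = [a for a in applicants if a["score"] >= THRESHOLD]
rejected = [a for a in applicants if a["score"] < THRESHOLD]

# Outcomes are only ever observed for the approved cases...
for a in approved:
    a["defaulted"] = random.random() < (1 - a["score"]) * 0.3

default_rate = sum(a["defaulted"] for a in approved) / len(approved)
print(f"approved {len(approved)}, observed default rate {default_rate:.1%}")

# ...so the rejected cases generate no outcome data at all. If many of them
# would in fact have repaid, the algorithm is over-cautious - and nothing
# in this monitoring loop can ever reveal it. Learning is one-sided.
print(f"rejected with no observable outcome: {len(rejected)}")
```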

Where the decision involves some human intervention, this gives us some further things to think about in evaluating the effectiveness of the decision-support. What are the statistical patterns of human intervention, and how do these relate to the way the decision-support software presents its recommendations?

Suppose that statistical analysis shows that the humans are basing their decisions on a much smaller subset of indicators, and that much of the data being presented to the human decision-makers is being systematically ignored. This could mean either that the software is too complicated (over-engineered) or that the humans are too simple-minded (under-trained). I have asked many CIOs whether they carry out this kind of statistical analysis, but most of them seem to think their responsibility for information management ends once they have provided the users with the requested information or service; how that information or service is then used is, apparently, not their problem.
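The analysis itself need not be elaborate. A first pass might simply check which of the indicators shown to the decision-makers actually co-vary with their decisions. The sketch below runs this check on simulated data in which the humans attend to only the first three of twenty indicators - everything here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

n_cases, n_indicators = 500, 20
indicators = rng.normal(size=(n_cases, n_indicators))

# Simulated human decisions that depend on only the first three indicators.
decision = (indicators[:, :3].sum(axis=1) > 0).astype(float)

# Which of the displayed indicators actually co-vary with the decision?
for j in range(n_indicators):
    r = np.corrcoef(indicators[:, j], decision)[0, 1]
    if abs(r) > 0.2:
        print(f"indicator {j}: correlation {r:+.2f}")
# Only indicators 0, 1 and 2 should appear: the other seventeen columns
# are being systematically ignored, however prominently they are displayed.
```

In a real deployment the decisions would come from case logs rather than a simulation, but the question is the same: which columns on the screen are actually doing any work?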

Meanwhile, the users may well have alternative sources of information, such as social media. One of the challenges Dion Hinchcliffe raises is how these richer sources of information can be integrated with the tabular data on which the traditional decision-support tools are based. I think this is what Dion means by "closing the clue gap".




Dion Hinchcliffe, The enterprise opportunity of Big Data: Closing the "clue gap" (ZDNet August 2011)

Dion Hinchcliffe, How social data is changing the way we do business (ZDNet November 2012)

Douglas Merrill, A Practical Approach to Reading Signals in Data (HBR Blogs November 2012)





Places are still available on my forthcoming workshops Business Awareness (Jan 28), Business Architecture (Jan 29-31), Organizational Intelligence (Feb 1).