William Hopkins of the Knowledge Capital Group, quoted by Jessica Tsai in "Who Analyzes the Analysts" (Destination CRM, March 2010), sums up the value proposition of an analyst in three words: insight, influence, and exposure. One obvious but flawed way to evaluate credibility would be to measure exposure, infer influence from exposure, and infer insight from influence.
This would be fine if those with the greatest exposure and influence were always those with the deepest insight. But I challenge the notion that these three factors are as tightly correlated as the expensive analyst firms would have us believe. See the draft manifesto for Next Practice Industry Analysis.
Update, March 8 2010
In an earlier version of this blog post, I quoted Barbara French of Tekrati. Barbara had been quoted in Jessica Tsai's article as follows: "When it comes to determining analyst credibility, French says, all the standard evaluation metrics apply: number of reference clients; published research; accurate forecasts; venues for (and reactions to) speaking engagements; press mentions; association membership and participation; and professional-network status."
We had a quick exchange on Twitter as follows.
@richardveryard: Article quotes Hopkins: insight, influence, exposure. Your measure of credibility infers influence and insight from exposure. Fair?
@bfr3nch: I advise looking for proof of industry analyst expertise/respect in media. Does that make sense?
@richardveryard: Barbara French of Tekrati advises "looking for proof of industry analyst expertise/respect in media".
@bfr3nch: You dropped the context. You only asked me about exposure. It's 1 of several measures of analyst expertise and respect.
@richardveryard: My manifesto suggests that those with greatest exposure don't always have deepest insight.
@bfr3nch: completely agree
Barbara regards the respect of other analysts as an important indicator of analyst insight. I certainly agree that it is a better indicator than simple exposure, a distinction I had failed to make. However, I still think it is an unsatisfactory indicator, because it places too much emphasis on majority opinion. As it happens, there is a debate on BBC Newsnight this evening about university funding cuts, including a proposal to abolish the country's only chair in palaeography (the study of ancient handwriting), at King's College London, which has prompted the assertion that only palaeographers can evaluate the importance of other palaeographers. I am uncomfortable with this assertion, although in the case of palaeography I don't know what the alternative is.
In the case of IT industry analysis, however, I do know what the alternative is. If we want to know how good an analyst's predictions are (assuming that's how we judge the quality of insight), the best way is to go back three years later and see how things actually worked out. Did IBM's vision prove far-sighted? Did Microsoft's strategy prove effective? Did Oracle's market share increase as expected? Did the technology converge and cluster in the expected ways? This in turn would only be possible if analysts were more willing to present their predictions in falsifiable terms, rather than as vague uncertainties like "SAP has a good chance of dominance in this sector ... ", which as a prediction is about as useless as saying that an athlete has a good chance of winning a gold medal at the next Olympics, as long as he trains hard and runs faster than everyone else. So I don't care whether all the tennis journalists in the world think Roger Federer has a good chance of winning a gold medal in 2012; let's go back and see what they said about him before the 2008 Olympics.
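To make "falsifiable terms" concrete, here is a minimal sketch, in Python, of how probability-weighted predictions could be scored against outcomes a few years later, using the standard Brier score. The analysts, probabilities, and outcomes below are entirely hypothetical, invented for illustration.

    # A minimal sketch of scoring analyst predictions after the fact.
    # All analysts, predictions, and outcomes here are hypothetical.

    def brier_score(calls):
        """Mean squared gap between stated probability and actual outcome.
        0.0 is perfect foresight; 0.25 is what pure 50/50 hedging earns."""
        return sum((p - outcome) ** 2 for p, outcome in calls) / len(calls)

    # Each pair: (probability assigned in 2007, outcome observed in 2010: 1 or 0).
    falsifiable_analyst = [(0.9, 1), (0.2, 0), (0.8, 1)]  # concrete, checkable calls
    hedging_analyst = [(0.5, 1), (0.5, 0), (0.5, 1)]      # "a good chance" every time

    print(brier_score(falsifiable_analyst))  # 0.03 -- confident calls that proved mostly right
    print(brier_score(hedging_analyst))      # 0.25 -- no better than a coin toss

Of course a bold analyst only scores well if the bold calls come off; the point is that hedged language like "a good chance" gives us nothing to score at all.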
But industry analysts don't always make it easy to track their prophecies over time. For example, Laurence Hart (October 2009) says that "Gartner really doesn’t want you to compare a vendor’s location in the MQ from year to year".