Thursday, January 17, 2013
@DouglasMerrill of @ZestFinance (via @dhinchcliffe) tells us A Practical Approach to Reading Signals in Data (HBR Blogs November 2012)
If we think of data in tabular form, there are two obvious ways of increasing the size of the table - increasing the number of rows (greater volume of cases) or increasing the number of columns (greater volume of signals). The latter can either involve a greater variety of variables, as Merrill advocates, or a higher frequency of the same variable. I have talked in the past about the impact of increased granularity on Big Data.
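To make the distinction concrete, here is a minimal sketch in Python/pandas; all column names and values are invented for illustration:

```python
import pandas as pd

# Base table: one row per case, one column per signal.
cases = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "income": [32000, 54000, 41000],
})

# Growth 1: more rows - a greater volume of cases.
more_cases = pd.concat(
    [cases, pd.DataFrame({"customer_id": [4], "income": [47000]})],
    ignore_index=True,
)

# Growth 2a: more columns through a greater variety of variables
# (Merrill's route - e.g. adding a rent payment signal).
cases["rent_payment_history"] = ["good", "poor", "good"]

# Growth 2b: more columns through higher frequency of the same variable
# (greater granularity - monthly income instead of one annual figure).
cases["income_jan"] = [2600, 4500, 3400]
cases["income_feb"] = [2700, 4400, 3500]

print(more_cases.shape, cases.shape)  # (4, 2) - grew down; (3, 5) - grew across
```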
As I understand it, Merrill's company sells Big Data solutions to the insurance underwriting industry, and its algorithms use thousands of different indicators to calculate risk.
The first question I always have in regard to such sophisticated decision-support technologies is what the feedback and monitoring loop looks like. If the decision is fully automated, then it would be good to have some mechanism to monitor the accuracy of the algorithm's predictions. The difficulty here is that there is usually no experimental control, so there is no direct way of learning whether the algorithm is being over-cautious. I call this one-sided learning.
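A small simulation makes the structure of the problem visible. Everything in this sketch (the scores, the threshold, the ground-truth default function) is invented; the point is simply that outcomes only ever enter the feedback loop for the cases the algorithm accepted:

```python
# A minimal simulation of one-sided learning, with invented numbers.
import random

random.seed(0)

def true_default_probability(score):
    # Hypothetical ground truth - unknown and unknowable to the lender.
    return max(0.0, 0.5 - 0.4 * score)

applicants = [random.random() for _ in range(10_000)]  # model scores in [0, 1]
THRESHOLD = 0.6  # possibly over-cautious - which is exactly what we cannot test

observed, unobserved = [], []
for score in applicants:
    defaulted = random.random() < true_default_probability(score)
    if score >= THRESHOLD:
        observed.append(defaulted)    # accepted: outcome enters the feedback loop
    else:
        unobserved.append(defaulted)  # rejected: outcome is never seen

print(f"Default rate among accepted (measurable): {sum(observed) / len(observed):.1%}")
print(f"Default rate among rejected (invisible):  {sum(unobserved) / len(unobserved):.1%}")
```

The second number is precisely what one-sided learning withholds: if it turned out to be low, the threshold was over-cautious, but no amount of monitoring the accepted cases will ever reveal that.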
Where the decision involves some human intervention, this gives us some further things to think about in evaluating the effectiveness of the decision-support. What are the statistical patterns of human intervention, and how do these relate to the way the decision-support software presents its recommendations?
Suppose that statistical analysis shows that the humans are basing their decisions on a much smaller subset of indicators, and that much of the data being presented to the human decision-makers is being systematically ignored. This could mean either that the software is too complicated (over-engineered) or that the humans are too simple-minded (under-trained). I have asked many CIOs whether they carry out this kind of statistical analysis, but most of them seem to think their responsibility for information management ends once they have provided the users with the requested information or service; how that information or service is actually used is, on this view, not their problem.
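For what it's worth, this kind of analysis is not hard to sketch. A hypothetical version: regress the humans' recorded decisions against the full set of indicators they were shown, and count how many indicators carry any weight. The data and the behavioural assumption (humans acting on only three of fifty indicators) are simulated:

```python
# Hypothetical sketch: which of the indicators shown to human decision-makers
# actually predict their decisions? Indicators with near-zero learned weight
# are, statistically speaking, being ignored. All data here is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_cases, n_indicators = 5_000, 50
indicators = rng.normal(size=(n_cases, n_indicators))

# Assumed behaviour (invented): the humans act on only 3 of the 50 indicators.
human_decision = (
    indicators[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=n_cases)
) > 0

model = LogisticRegression(max_iter=1000).fit(indicators, human_decision)
weights = np.abs(model.coef_[0])
ignored = int(np.sum(weights < 0.1 * weights.max()))
print(f"{ignored} of {n_indicators} indicators appear to carry no weight")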
Meanwhile, the users may well have alternative sources of information, such as social media. One of the challenges Dion Hinchcliffe raises is how these richer sources of information can be integrated with the tabular data on which the traditional decision-support tools are based. I think this is what Dion means by "closing the clue gap".
Dion Hinchcliffe, The enterprise opportunity of Big Data: Closing the "clue gap" (ZDNet August 2011)
Dion Hinchcliffe, How social data is changing the way we do business (ZDNet Nov 2012)
Douglas Merrill, A Practical Approach to Reading Signals in Data (HBR Blogs November 2012)
Places are still available on my forthcoming workshops Business Awareness (Jan 28), Business Architecture (Jan 29-31), Organizational Intelligence (Feb 1).
Thursday, September 01, 2011
Black Swan Blindness
In my post Black Swans and Complex System Failure, I talked about the architectural implications of some recent disasters, including the Gulf of Mexico oil spillage in 2010 and the partial melt-down in Japanese nuclear reactors following the tsunami in 2011. Both of these disasters involved something that isn't supposed to happen: the simultaneous failure of multiple fail-safe mechanisms.
A new study by Oxford University and McKinsey finds a similar phenomenon in technology investment, where large IT projects may experience spiralling costs as a result of multiple problems occurring simultaneously. According to the researchers, such compound overruns are up to twenty times more frequent than traditional risk modelling techniques would predict, with one in six large IT projects going over budget by an average of over 200%. The researchers refer to the tendency to disregard rare but high-impact risks as black swan blindness.
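The gap between traditional risk modelling and the observed frequency of extreme overruns is essentially a thin-tail versus fat-tail question. The following sketch uses invented parameters (not the study's data) purely to show how large that gap can be:

```python
# Invented parameters (not the Oxford/McKinsey data): compare how often a
# thin-tailed and a fat-tailed overrun model produce a >200% cost overrun.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Thin-tailed model: overruns roughly normal around a modest mean.
thin = rng.normal(loc=0.27, scale=0.4, size=n)

# Fat-tailed model: similar typical overrun, but a heavy lognormal tail.
fat = rng.lognormal(mean=np.log(0.27), sigma=1.2, size=n)

for name, sample in [("thin-tailed", thin), ("fat-tailed", fat)]:
    print(f"{name}: P(overrun > 200%) = {np.mean(sample > 2.0):.3%}")
# The fat-tailed model makes extreme overruns orders of magnitude more likely,
# even though both models agree on what a "typical" project looks like.
```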
As an example, Professor Bent Flyvbjerg cites the collapse of Auto Windscreens, which went into administration in February following a disastrous attempt to implement a new IT system. "Black swans often start as purely software issues. But then several things can happen at the same time - economic downturn, financial difficulties - which compound the risk," he explained.
Professor Flyvbjerg has coined the term Black Swan Management, which currently merits its own Wikipedia page. Simon Moore (author of Strategic Project Portfolio Management) questions whether it is appropriate to use the term "black swan" for something that occurs with a one in six probability, but supports Flyvbjerg's conclusion that when projects go wrong they can go extremely wrong.
Flyvbjerg makes five fairly bland recommendations for avoiding IT project failure, including recruiting a "master builder". Some people may interpret this as an endorsement of the large IT service firms, but these firms have been responsible for some of the most extravagant failures. Is there any evidence that master builders are any more immune from "black swan blindness" than anyone else? Indeed, as a Scandinavian, Flyvbjerg will hardly need reminding of Ibsen's portrayal of madness in The Master Builder.
'Black swans' busting IT budgets (BBC News, 26 August 2011)
Bent Flyvbjerg and Alexander Budzier, Why Your IT Project May Be Riskier than You Think (Harvard Business Review, September 2011, pp. 601-603)
Natasha Lomas, Five ways to stop your IT projects spiralling out of control and overbudget (Silicon.com, 22 August 2011) (pdf)
Brenda Michelson, Complexity, Outliers and the Truth on IT Project Failure (HP Input-Output, 31 Aug 2011)
Simon Moore, Black Swans In Project Management (August 25, 2011)
Wednesday, June 08, 2011
Ethics of Risk in Public Sector IT
@tonyrcollins via @glynmoody and @Mark_Antony asks Should winning bidders tell if they suspect a new contract is undeliverable? (8 June 2011) and raises some excellent ethical points about public sector procurement.
One of the functions of good journalism is to hold people and organizations to account. Tony fishes out a speech given in 2004 by Sir Christopher Bland, then chairman of BT, in which he acknowledged incomplete success in previous ventures, and admitted the extraordinary challenges involved in the NPfIT, for which BT had just won three contracts then valued at over £2bn.
There is obviously a difference between something's being extremely difficult and its being impossible. BT executives can fairly claim that they were always open about the chance that it was going to be difficult, and that they didn't know for sure that it was going to be impossible. But at the same time, there is an asymmetry of information here - the supplier is presumably in a better position to assess certain classes of risk than the customer. (Meanwhile, there may be other classes of risk that the customer should know more about than the supplier.)
In my opinion, the ethical issues here are not to do with deliberate concealment of known facts, but with misleading or inadequate assessment of shared risk. The key word in Tony's headline is the word "suspect". So what are the ethics of doubt?
Labels:
ethics,
publicsector,
risk,
risk-trust-security
Friday, March 04, 2011
IT analysis and trust
@mkrigsman asks "Trust is the currency that matters most. How many analysts / bloggers deserve it?"
@markhillary replies "surely in the same way as a journalist is trusted, by earning it"
@mkrigsman is particularly concerned about those who write about IT failure. (I'm not sure why he singles out that topic, but I note that the concern arose during a conversation with @benioff, boss of Salesforce.) "When someone writes on IT failures ask "What's their angle?". Usually sensationalism, currying favor, or threatening a vendor." When challenged about his own angle by @njames, @mkrigsman replies "I want to expose *why* projects fail, so we understand magnitude of the problem and can improve."
Trust is clearly a difficult issue for software industry analysts. Unfortunately, Michael's answer to Nigel's challenge cannot prove that he doesn't have a hidden agenda, because the untrustworthy are often just as able as the trustworthy to produce a plausible cover story. If we trust Michael it's not because he can answer the challenge but because of his track record.
We also need to ask - trusted by whom? Software companies might prefer industry analysts to be compliant and predictable, but intelligent software users might regard such analysts as being insufficiently independent. Who would you trust to tell you about Microsoft's new platform - someone who is always pro-Microsoft, someone who is always anti-Microsoft, or someone who has a track record of making both positive and negative comments about Microsoft and its competitors?
Of course, this comment doesn't only apply to industry analysts. Robert Scoble, when he worked for Microsoft, made a point of distancing himself from the party line, and he therefore commanded a different kind of attention and respect than did Bill Gates or Steve Ballmer.
From a simplistic software industry perspective, an analyst who talks about IT success might be regarded as a friend, whereas an analyst who talks about IT failure is potentially an enemy. (This might explain Marc Benioff's wish to challenge the hidden agenda of the latter.) While many software and service companies might adopt the from-failure-to-success rhetoric - "the best way to avoid the risk of failure is to buy our software and hire our consultants" - dwelling on failure is still not ideal from a sales perspective.
Mark Hillary appeals to a journalistic ethic, which would presumably include things like balance and transparency. But balance is not always appreciated by those with most at stake. In the past, I have written technology reports on new products, which I regarded as generally positive with a few small caveats. (I don't generally waste my time writing about products that are no good.) But the vendors concerned have often regarded my remarks as highly critical. (Fortunately, this over-sensitivity on the part of software companies is now changing, thanks in part to social media, and companies now understand that a robust debate can be just as beneficial as a highly controlled one-way marketing exercise.)
From a narrow software industry perspective, a trustworthy industry analyst is one who satisfies Simon Cameron's definition of an honest politician - "one who, when he is bought, will stay bought". But from a broader perspective, we should surely prefer to trust those industry analysts with independently critical minds, unafraid to ask awkward questions and publish the answers.
With the large industry analysis firms, the question of trust shifts from personal integrity to corporate integrity. The sales pitch for these firms depends not just on isolated flashes of insight from individual analysts, but on the collaborative intelligence of a community of analysts. Corporate integrity depends not just on transparency about the relationship between the work paid for by software vendors and the independent research consumed by CIOs, but also on a coherent and robust research methodology adopted consistently across the firm, typically supported by an apparatus of surveys, structured questionnaires, checklists and spreadsheets.
However, there is a potential disconnect between the routine processing of supposedly objective raw data (this product with this market share in this geography in this time period) and the generation of useful interpretation and opinion, which is where the analytical magic and subjectivity come in. One example of this magic, Gartner's Magic Quadrant, has been challenged in the courts; Gartner's defence has been that the MQ represented opinion rather than fact. (See my post The Magic Sorting Hat is Innocent, Okay?) And the complicated relationship between fact and opinion, and the transparency of reasoning and evidence, is surely relevant to the level of trust that different stakeholders can invest in such analyses.
By the way, why am I writing about software industry analysis? Obviously, because I want to expose *why* analysis fails, so we understand magnitude of the problem and can improve. How can software industry analysis deliver greater levels of intelligence and value to the software industry as a whole?
Labels:
risk,
risk-trust-security,
softwareindustryanalysis,
trust
Friday, February 18, 2011
Jeopardy and Risk
@Forrester's Andras Cser notes the victory of IBM's Watson computer in a TV quiz game, and asks How Can You Capitalize On This In Risk And Fraud Management?
In his short blogpost, Cser doesn't offer an answer to this question. He merely makes one assertion and one prediction.
Firstly he asserts an easy and superficial connection between the game of Jeopardy and the profession of security, based on "the complexity, amount of unstructured background information, and the real-time need to make decisions." Based on this connection, he makes a bold prediction on behalf of Forrester.
"Forrester predicts that the same levels of Watson's sophistication will appear in pattern recognition in fraud management and data protection. If Watson can answer a Jeopardy riddle in real time, it will certainly be able to find patterns of data loss, clustering security incidents, and events, and find root causes of them. Mitigation and/or removal of those root causes will be easy, compared to identifying them."
As this is presented as a corporate prediction rather than merely a personal opinion, I'm assuming that this has gone through some kind of internal peer review, and is based on an analytical reasoning process supported by detailed discussions with the IBM team responsible for Watson. I'm assuming Forrester has a robust model of decision-making that justifies Cser's confidence that the Jeopardy victory can be easily translated into the fraud management and data protection domain within the current generation of technology. (Note that the prediction refers to what Watson will be able to do, not what some future computer might be able to do.)
For my part, I have not yet had the opportunity to talk with the IBM team and congratulate them on their victory, but there are some important questions to explore. I think one of the most interesting elements of the Watson victory is not the complexity - which other commentators such as Paul Miller of Engadget have downplayed - but the apparent ability to outwit the other competitors. This ability may well be relevant to a more agile and intelligent approach to security, but that's a long way from the simplistic connection identified by Cser. Meanwhile, I look forward to seeing the evidence that Watson is capable of analysing root causes, which would be a lot harder than winning at Jeopardy.
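To make that distinction concrete, here is a minimal sketch with invented event data: clustering security incidents is the mechanically easy part, and nothing in the output even gestures at a root cause:

```python
# Invented event data: clustering security incidents is the mechanically easy
# part. Note that nothing in the output says *why* the incidents occurred.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Each event: (MB exfiltrated, hour of day). Two synthetic regimes.
events = np.vstack([
    rng.normal([500, 3], [50, 1], size=(100, 2)),   # night-time bulk transfers
    rng.normal([5, 14], [2, 2], size=(100, 2)),     # small daytime anomalies
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(events)
print(np.bincount(labels))  # two clean clusters...
# ...but a leaked credential, a misconfigured firewall, and a malicious insider
# could all produce the same cluster. The root-cause question remains open.
```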
Paul Miller, Watson wins it all, humans still can do some other cool things (Engadget 16 Feb 2011)
IBM's Watson supercomputer crowned Jeopardy king (BBC News 17 Feb 2011)
Wednesday, December 09, 2009
IT suppliers face architectural risk
@tonyrcollins reports on the implications for large IT contracts of the Centrica v Accenture dispute (Computer Weekly, 9 December 2009). The dispute concerns a "best-of-breed" replacement billing system for the entire British Gas business, which Centrica ordered from Accenture in 2002.
Centrica is invoking a clause in the contract that refers to "fundamental defects", and a lot of the legal activity has been trying to determine what this phrase actually means. Although Accenture argues that the various problems experienced with the system have been unconnected and therefore don't count as fundamental, the High Court has accepted Centrica's interpretation that the cumulative effect of these defects may indeed be regarded as fundamental.
The article quotes Peter Clough, head of disputes at law firm Osborne Clarke:
"One of the important points to note about this case is that IT suppliers can be liable for claims for fundamental breach arising from the cumulative effect of a series of faults, each of which could look relatively minor in isolation. The majority of systems will of course be inter-linked so that a defect in part of the process could affect another part, snowballing into a more serious issue."
So this is about architecture and risk. From a risk management perspective, a critical responsibility of the architect is to make sure that a lot of small problems don't add up to a big problem.
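Clough's snowball point can be put into a couple of lines of arithmetic. A hypothetical sketch, with the 97% figure invented for illustration:

```python
# Invented numbers: how "relatively minor" defects in inter-linked components
# compound into a serious end-to-end failure rate.
component_reliability = [0.97] * 10   # ten linked components, each 97% reliable

p_end_to_end = 1.0
for r in component_reliability:
    p_end_to_end *= r                 # the whole chain needs every link to work

print(f"End-to-end reliability: {p_end_to_end:.0%}")   # roughly 74%
# Ten individually minor 3% defect rates snowball into a ~26% chance that any
# given end-to-end billing run fails somewhere along the chain.
```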
And it is also about procurement and risk. If this judgement stands, it appears to shift certain kinds of risk from the customer to the supplier. Obviously one solution to this would be to redraft procurement contracts. But another solution may be that large IT suppliers may be required to engage much more proactively with the broader architectural context for the systems they are building.
So can we expect all the major IT suppliers to look at architecture and risk from a new perspective?
Labels:
EA,
enterprise architecture,
procurement,
risk,
risk-trust-security