Back in September 2005, I commented on some material
by MicroStrategy identifying Five Types of Business Intelligence. I
arranged these five types into a 2x2 matrix, and commented on the fact
that the top right quadrant was then empty.
The
Cloud BI and analytics vendor Birst has now produced a similar matrix
to explain what it is calling Networked BI, placing it in the top
right quadrant. Gartner has been talking about Mode 1 (conventional) and
Mode 2 (self-service) approaches to BI, so Birst is calling this
Mode 3.
While
there are some important technological advances and enablers in the
Mode 3 quadrant, I also see it as a move towards Collaborative BI, which
is about the collective ability of the organization to design
experiments, to generate analytical insight, to interpret results, and
to mobilize action and improvement. This means not only sharing the data, but also sharing the insight and the actioning of the insight. Thus we are not only
driving data and analytics to the edge of the organization, but also
developing the collective intelligence of the organization to use data
and analytics in an agile yet joined-up way.
I first mentioned Collaborative BI on my blog during 2005, and discussed it
further in my article for the CBDI Journal in October 2005. The concept
started to gather momentum a few years later, thanks to Gartner, which
predicted the development of collaborative decision-making in 2009, as
well as some interesting work by Wayne Eckerson. Also around this time,
there were some promising developments by a few BI vendors, including
arcplan and TIBCO. But internet searches for the concept are dominated by material between 2009 and 2012, and things seem to have gone quiet recently.
Previous posts in this series
Service-Oriented Business Intelligence (September 2005)
From Business Intelligence to Organizational Intelligence (May 2009)
TIBCO Platform for Organizational Intelligence (March 2011)
Other sources
Gartner Reveals Five Business Intelligence Predictions for 2009 and Beyond (Gartner, January 2009). Dave Linthicum, Let's See How Gartner is Doing (ebizQ, May 2009)
Chris Middleton, Business Intelligence: Collaborative Decision-Making (Computer Weekly, July 2009)
Ian Bertram, Collaborative Decision-Making Platforms (Gartner 2011)
Wayne Eckerson, Collaborative Business Intelligence: Optimizing the Process of Making Decisions (April 2012)
Monique Morgan, Collaborative BI: Today and Tomorrow (arcplan, April 2012)
Tiemo Winterkamp, Top 5 Collaborative BI Solution Criteria (arcplan, April 2012)
Cliff Saran, Prepare for two modes of business intelligence, says Gartner (Computer Weekly, March 2015)
The Future of BI is Networked (Birst, March 2016)
Updated 21 April 2016 (image corrected)
Saturday, April 26, 2014
Does Big Data Release Information Energy?
@michael_saylor of #MicroStrategy says that the Information Revolution is about harnessing "information energy" (The Mobile Wave, p 221). He describes information as a kind of fuel that generates "decision motion", driving people - and machines - to make a decision and take a course of action.
We already know that putting twice as much fuel into a vehicle doesn't make it twice as fast or twice as reliable. (Indeed, aeroplanes sometimes dump fuel to enable a safer landing.) But Saylor explains that information energy is not the same as physical energy.
1. Information energy doesn't follow conservation laws. Information can be created, consumed repeatedly, but never depleted or destroyed. (Unless it is lost or forgotten.)
2. Whereas physical energy is additive, the energy content of information is exponential.
3. The value of information depends on its use, and who is using it.
Let's look at his example.
"Total wheat production for a single year is valuable information; but total wheat production for ten years, combined ten years of rainfall data and ten years of fertilizer represents thirty times more data droplets, but probably contains one hundred times more information energy, because it shows trends and correlations that will drive a greater number of decisions." (pp 221-2).
In other words, thirty times as much data produces a hundred times more information. He doesn't say this extra information MAY drive more decisions, he says it WILL drive more decisions. In other words, the Information Revolution (and our increasing reliance on tools such as MicroStrategy's products) is a historical inevitability.
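As a rough back-of-envelope reading of those figures (my own working, not Saylor's), if information scaled as a power of the volume of data, his example would imply an exponent of roughly

$$ k = \frac{\log 100}{\log 30} \approx 1.35, \qquad \text{i.e. information} \propto \text{data}^{1.35} $$

that is, information growing distinctly faster than the data itself.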
But is it really true that more data produces more information in this exponential way? In practice, there is a depreciation effect for historical or remote data, because an accumulation of small changes in working practices and technologies can make direct comparison misleading or impossible. So even if the farmer had twenty years' worth of data, or shared data from thousands of other farmers, it would not necessarily help her to make better decisions. Five years' data might be almost as good as ten years'.
Data is moving faster than ever before; we're also storing and processing more and more of it. But that doesn't mean we're just hoarding data, says Duncan Ross, director of data sciences at Teradata: "The pace of change of markets generally is so rapid that it doesn't make sense to retain information for more than a few years." (Charles Arthur, Tech giants may be huge, but nothing matches big data, Guardian 23 August 2013)
According to Saylor, the key to releasing information energy is mobile technology.
"The shocking thing about information is not how much there is, but how inaccessible it is despite the immense value it represents. ... Mobile computing puts information energy in hands of individuals during all waking hours and everywhere they are." (p 224)
What kind of decisions does Saylor imagine the farmer needs to make while sitting on a tractor or milking the cows? Obviously it would be useful to get an early warning of some emerging problem - for example an outbreak of disease further down the valley, or possible contamination of a batch of feed or fertilizer at the factory. But complex information needs interpretation, and most decisions require serious reflection, not instant reaction.
So it is not clear that providing instant access to large quantities of information is going to improve the quality of decision-making. And giving people twice as much information often leads to further procrastination. Surely the challenge for MicroStrategy is to help people deal with information overload, not just add to it?
Furthermore, as I said in my post Tablets and Hyperactivity (Feb 2013), being "always on" means that you never have long enough to think through something difficult before you are interrupted by another event. There is always another email to attend to, there is always something happening on Twitter or Facebook, and mobile devices encourage and reinforce this kind of hyperactivity.
Saylor concludes that "the acid of technology etches away the unnecessary" (p 237). If only this were true.
Related posts
Service-Oriented Business Intelligence (September 2005)
On The True Nature of Knowledge (April 2014)
Updated 19 June 2014
Labels:
analytics,
BI,
big data,
decision-support,
MicroStrategy,
mobile,
TotalData
Thursday, January 17, 2013
Business Signal Optimization
@DouglasMerrill of @ZestFinance (via @dhinchcliffe) tells us A Practical Approach to Reading Signals in Data (HBR Blogs November 2012)
If we think of data in tabular form, there are two obvious ways of increasing the size of the table - increasing the number of rows (greater volume of cases) or increasing the number of columns (greater volume of signals). This can either involve a greater variety of variables, as Merrill advocates, or a higher frequency of the same variable. I have talked in the past about the impact of increased granularity on Big Data.
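To make the rows-versus-columns distinction concrete, here is a minimal pandas sketch; the column names and values are invented for illustration. Growing the table downwards adds cases, growing it sideways adds signals about each case.

```python
import pandas as pd

# A small table of cases (rows) and signals (columns); all values are invented
cases = pd.DataFrame({
    "applicant_id": [1, 2, 3],
    "income": [32000, 54000, 41000],
    "existing_debt": [5000, 12000, 800],
})

# More rows: a greater volume of cases
more_cases = pd.concat(
    [cases, pd.DataFrame({"applicant_id": [4], "income": [27000], "existing_debt": [3100]})],
    ignore_index=True,
)

# More columns: a greater volume of signals per case, either a new variety of
# variable or the same variable observed at a higher frequency
more_signals = more_cases.assign(
    utility_payment_lateness_days=[0, 3, 0, 11],     # a new variable
    income_last_quarter=[8000, 13500, 10200, 6800],  # the same variable, more often
)

print(more_signals.shape)  # (4, 5): four cases, five signals
```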
As I understand it, Merrill's company sells Big Data solutions to the insurance underwriting industry, and its algorithms use thousands of different indicators to calculate risk.
The first question I always have in regard to such sophisticated decision-support technologies is what the feedback and monitoring loop looks like. If the decision is fully automated, then it would be good to have some mechanism to monitor the accuracy of the algorithm's predictions. The difficulty here is that there is usually no experimental control, so there is no direct way of learning whether the algorithm is being over-cautious. I call this one-sided learning.
Where the decision involves some human intervention, this gives us some further things to think about in evaluating the effectiveness of the decision-support. What are the statistical patterns of human intervention, and how do these relate to the way the decision-support software presents its recommendations?
Suppose that statistical analysis shows that the humans are basing their decisions on a much smaller subset of indicators, and that much of the data being presented to the human decision-makers is being systematically ignored. This could mean either that the software is too complicated (over-engineered) or that the humans are too simple-minded (under-trained). I have asked many CIOs whether they carry out this kind of statistical analysis, but most of them seem to think their responsibility for information management ends once they have provided the users with the requested information or service; how that information or service is then used is not their problem.
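One way to carry out the kind of statistical analysis suggested above is to check how strongly each indicator presented to the decision-makers is associated with the decisions they actually take. The sketch below is purely illustrative: the file, column names and threshold are invented, and a serious analysis would also need to allow for correlations between indicators.

```python
import pandas as pd

# decisions.csv is hypothetical: one row per case, the indicators shown to the
# decision-maker, plus the decision actually taken (1 = approve, 0 = decline)
df = pd.read_csv("decisions.csv")
indicators = [c for c in df.columns if c != "human_decision"]

# Simple association check: correlation of each indicator with the decision taken
association = df[indicators].corrwith(df["human_decision"]).abs().sort_values()

# Indicators with negligible association are candidates for "systematically ignored"
ignored = association[association < 0.05]
print(f"{len(ignored)} of {len(indicators)} indicators show almost no link to the decisions:")
print(ignored)
```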
Meanwhile, the users may well have alternative sources of information, such as social media. One of the challenges Dion Hinchcliffe raises is how these richer sources of information can be integrated with the tabular data on which the traditional decision-support tools are based. I think this is what Dion means by "closing the clue gap".
Dion Hinchcliffe, The enterprise opportunity of Big Data: Closing the "clue gap" (ZDNet August 2011)
Dion Hinchcliffe, How social data is changing the way we do business (ZDNet Nov 2012)
Douglas Merrill, A Practical Approach to Reading Signals in Data (HBR Blogs November 2012)
Places are still available on my forthcoming workshops Business Awareness (Jan 28), Business Architecture (Jan 29-31), Organizational Intelligence (Feb 1).
Labels:
BI,
big data,
decision-support,
event processing,
orgintelligence,
risk,
risk-trust-security
Monday, March 07, 2011
TIBCO platform for organizational intelligence
By adding tibbr to its established software portfolio, TIBCO has now extended its range of organizational intelligence technologies. Last week I spoke with Stefan Farestam of TIBCO to discuss the present and future prospects for TIBCO customers linking these technologies together in interesting ways.
We talked about three main technology areas: Complex Event Processing (CEP), Business Process Management (BPM) and Enterprise 2.0. For TIBCO at least, these technologies are at different stages of adoption and maturity. TIBCO's CEP and BPM tools have been around for a while, and there is a fairly decent body of experience using these tools to solve business problems. Although the first wave of deployment typically uses each tool in a relatively isolated fashion, Stefan believes these technologies are slowly coming together, as customers start to combine CEP and BPM together to solve more complex business problems.
Much of the experience with CEP has been in tracking real-time operations. For example, telecommunications companies such as Vodafone can use complex event processing to monitor and control service disruptions. This is a critical business concern for these companies, as service disruptions have a strong influence on customer satisfaction and churn. CEP is also used for autodetecting various kinds of process anomalies, from manufacturing defects to fraud.
One of the interesting things about Business Process Management is that it operates at several different tempi, with different feedback loops.
- A modelling and discovery tempo, in which the essential and variable elements of the process are worked out. Oftentimes, full discovery of a complex process involves a degree of trial and error.
- An optimization and fine-tuning tempo, using business intelligence and analytics and simulation tools to refine decisions and actions, and improve business outcomes.
- An execution tempo, which applies (and possibly customizes) the process to specific cases.
The events detected by CEP can then be passed into the BPM arena, where they are used to trigger various workflows and manual processes. This is one of the ways in which CEP and BPM can be integrated.
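As a rough illustration of that hand-over (a generic sketch, not TIBCO's actual API), a CEP-style rule might watch a stream of events for a pattern - say, repeated disruptions on the same service within a short window - and raise a workflow task when the pattern fires. All identifiers and thresholds below are invented.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 3  # invented rule: three disruptions in ten minutes opens a case

recent = defaultdict(deque)  # service_id -> timestamps of recent disruption events

def create_workflow_task(service_id, evidence):
    # Stand-in for handing the event over to a BPM engine or a manual work queue
    print(f"Open investigation case for {service_id}: {len(evidence)} disruptions")

def on_event(event):
    """event is a dict like {'service_id': 'cell-042', 'type': 'disruption', 'ts': datetime}."""
    if event["type"] != "disruption":
        return
    q = recent[event["service_id"]]
    q.append(event["ts"])
    while q and event["ts"] - q[0] > WINDOW:
        q.popleft()
    if len(q) >= THRESHOLD:
        create_workflow_task(event["service_id"], list(q))
        q.clear()

now = datetime.now()
for i in range(3):
    on_event({"service_id": "cell-042", "type": "disruption", "ts": now + timedelta(minutes=i)})
```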
Social software and Enterprise 2.0 can also operate at different tempi - from a rapid and goal-directed navigation of the social network within the organization to a free-ranging and unplanned exploration of business opportunities and threats. TIBCO's new product tibbr is organized around topics, allowing and encouraging people to develop and share clusters of ideas and knowledge and experience.
Curiously, the first people inside TIBCO to start using tibbr were the finance people, who used it among other things to help coordinate the flurry of activity at quarter end. (Perhaps it helped that the finance people already shared a common language and a predefined set of topics and concerns.) However, the internal use of tibbr within TIBCO has now spread to most other parts of the organization.
The organization of Enterprise 2.0 around topics appears to provide one possible way of linking with CEP and BPM. A particularly difficult or puzzling event (for example, a recurrent manufacturing problem) can become a topic for open discussion (involving many different kinds of knowledge), leading to a coordinated response. The discussion is then distilled into a resource for solving similar problems in future.
TIBCO talks a great deal about "contextually relevant information", and this provides a common theme across all of these technologies. It helps to think about the different tempi here. In the short term, what counts as "contextually relevant" is preset, enabling critical business processes and automatic controls to be operated efficiently and effectively. In the longer term, we expect a range of feedback loops capable of extending and refining what counts as "contextually relevant".
- On the one hand, weak signals can be detected and incorporated into routine business processes. Wide-ranging discussion via Enterprise 2.0 can help identify such weak signals.
- On the other hand, statistical analysis of decisions can determine how much of the available information is actually being used. Where a particular item of information appears to have no influence on business decisions, its contextual relevance might need to be reassessed.
My eBook on Organizational Intelligence is now available from LeanPub. leanpub.com/orgintelligence
Related posts: Two-Second Advantage (May 2010), Embedding Intelligence into the Business Process (November 2010)
Labels:
BI,
BPM,
enterprise2,
event processing,
orgintelligence,
TIBCO
Friday, November 19, 2010
Embedding Intelligence into the Business Process
Is the business process evolving from bureaucratic workflow towards some form of flexible intelligence? Some of us have been predicting this for a few years now, but there are some hopeful signs that it may finally be starting to happen.
In this post, I'm going to talk about two specific aspects of this.
- Embedding business intelligence (BI) into the business process.
- Embedding Enterprise 2.0 into the business process.
Embedded BI
The idea of embedded business intelligence has been around for many years. See my blog on Service-Oriented Business Intelligence from September 2005. See also my slideshare presentation.
When software vendors talk about embedded BI, they often mean embedding BI functionality in other pieces of software - for example ERP applications - to allow these applications to produce more interesting reports. There are several niche BI producers in this space, including Jaspersoft, Pentaho and Yellowfin. Brian Gentile of Jaspersoft talks about this in his recent article The BI Revolution: Business Intelligence's Future (TDWI, November 10, 2010). For an article explaining the difference between Embedded BI and Integrated BI, see Execution MIH.
For BI to be embedded in the business process, we need to have an understanding of the business process that includes some cognitive task, such as a complex decision, where some business intelligence capability can be used specifically to support this cognitive task. In some cases, the aim might be to make the process faster and more efficient, but more usually the aim is to make the process more powerful and effective.
Embedded BI in this sense can also be related to intelligent event processing, where analytic capability embedded in one process can trigger automatic as well as human responses in other processes. Brian Gentile talks about this in an earlier article The BI Revolution: A New Generation of Analytic Applications (TDWI, October 20, 2010).
Beyond embedding BI in the business process, we might look forward to a state in which analytics is embedded in the entire enterprise, what Tom Davenport and his colleagues call the Analytic Organization. (See my review of Competing on Analytics.) This is the proper meaning of the term Pervasive BI, which Dave Mittereder defined in 2005 as "empowering everyone in the organization, at all levels, with analytics, alerts and feedback mechanisms" (Pervasive Business Intelligence: Enhancing Key Performance Indicators Information Management Magazine, April 2005).
Embedded Enterprise 2.0
In her piece Time For Enterprise 2.0 To Get Enterprisey, Sandy Kemsley takes a sceptical look at the extent to which Enterprise 2.0 is supporting the core business.
"You hear great stories about social software being used to strengthen weak ties through internal social networking, or fostering social production by using a wiki for project documents, but many less stories about using social software to actually run the essential business processes."
She quotes Andrew McAfee:
"[The CIOs] weren’t too worried that their people would use the tools to waste time or goof off. In fact, quite the opposite; they were concerned that the busy knowledge workers within their companies might not have enough time to participate."
And comments:
"The fact that the knowledge workers had a choice of whether to participate tells me that the use of social business software is still somewhat discretionary in these companies, that is, it’s not running the core business operations; if it were, there wouldn’t be a question of participation."
It seems to me that there are two possible interpretations of McAfee's remark. Sandy's interpretation is that busy knowledge workers simply don't find time to do any Enterprise 2.0 stuff at all, and she concludes that if the business process can still work without it, then the Enterprise 2.0 stuff is discretionary. An alternative interpretation might be that the business knowledge workers don't have enough time to do enough Enterprise 2.0 stuff to get as much intelligence (requisite variety) into the business process as the business really needs. (I happen to prefer the second interpretation, but I don't know whether this is what McAfee really meant.) In other words, it could be that there is a trickle of benefit rather than a decent flow.
I'm presuming that the way Enterprise 2.0 is used within the business process is to support specific cognitive tasks, such as interpreting and making sense of events, and making complex decisions. These tasks may be done by an individual knowledge worker, possibly drawing on knowledge made available by co-workers, or may be done collectively by a network of knowledge workers. The quality of sense-making and decision-making doesn't necessarily increase just because you have more people spending more time on it, but with highly complex business situations the opposite is almost certainly true - the quality will be impaired if you have too few people devoting insufficient time and attention.
But I worry a bit when technology vendors merely invoke the magic words "business process" without demonstrating any real understanding. For example, Sandy's blog links to Klint Finley's piece on Tying Enterprise 2.0 to Business Processes, or Creating New Processes for the Social Enterprise, which doesn't say anything I can recognize as being about business process; as far as I can see, it is largely about activity stream filtering as a technical solution for integrating pieces of software. Finley's piece links in turn to a Monday Musing by R "Ray" Wang which states that
Organizations seeking a marketing edge must digest, interpret, and asses (sic) large volumes of meta data from sources such as Facebook Open Graph.
I think there may possibly be a business process implicit in there somewhere, but exactly how this business process would be supported by Enterprise 2.0 is left to the imagination. I hope "asses" isn't a Freudian slip.
To be clear, I can see how the technologies Klint and "Ray" are talking about might possibly be embedded into a sociotechnical system to support a real business process. But they aren't actually making the connection, nor are they providing any evidence that anyone else is doing so. Even Michael Idinopulos, who at least sounds as if he knows what he is talking about in The End of the Culture 2.0 Crusade? fails to provide any concrete examples. He may have seen some evidence, but he's not telling us. So (not for the first time) it's left to Tom Davenport to say something useful. In a short blogpost for HBR, he provides a couple of examples of what can be done when the social and structuring aspects of technology are combined (Want Value from Social? Add Structure).
Note: some of the larger software vendors have a stake in several of these areas, and are trying to integrate different product lines. For example, IBM adds predictive analytics and social networking in Cognos 10 (SearchBusinessAnalytics.com, 25 Oct 2010). Meanwhile, the niche software providers may be developing interesting partnerships and collaborations - Brian Gentile emails me with a note about the embedding of Jaspersoft within eBuilder, a Swedish provider of an end-to-end B2B suite of Cloud Supply-Chain Management Processes, to produce what they call a Strategic Management Tool.
Places are still available for my Organizational Intelligence Workshop on December 8th.
Friday, June 18, 2010
Device-Driven Business IT Alignment?
@LTucci suggests Using the sex appeal of the iPad to push BI reporting in the C-suite (Total CIO, June 2010). @rtolido glosses this as "looking for better business-IT alignment? Get your CEO an iPad".
Linda Tucci talks about "democratizing business intelligence software" and announces that "users can become masters of their own dashboards!" Although giving more power to CEOs is a curious kind of democratization, I can see that allowing CEOs to become masters of their own dashboards could be interpreted as a move toward some kind of business-IT alignment.
But I hope the reference to the iPad is intended to be satirical, because believing that the CEO would be seduced by some device, thus magically achieving business-IT alignment, would not only show fair contempt for the CEO but also trivialize the notion of alignment. This belief appears to be an extreme form of technology fetishism, christened the device paradigm by the philosopher of technology Albert Borgmann.
For a similar kind of satire, see Newsbiscuit's proposal to give the UK Deputy Prime Minister a toy plastic steering wheel.
Tuesday, May 25, 2010
From Buzz to Actionable Intelligence
I've been looking at software that tracks and analyses mentions of keywords across the Internet (sometimes called Buzz).
Why would anyone want to do this? The first obvious interest is in tracking mindshare. How many people are talking about your product versus its competitors?
But it's not enough just to count the mentions of your product. When Microsoft launched the Zune, this was almost universally compared with the Apple iPod, so within a day or two there were thousands of webpages mentioning both. But unsurprising information is of little value; what's potentially significant here is not the absolute numbers but the relative shifts.
There are some important questions here about the volatility of buzz data. If mindshare fluctuates, is this a significant movement, or just random noise? The challenge is to build up enough statistical history to be able to set realistic action thresholds, and to identify potentially important weak signals for further investigation.
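One simple way to separate a real shift in mindshare from noise is to compare each day's share of mentions with its own recent history - for example, flagging anything more than a few standard deviations away from a rolling mean. The sketch below assumes a hypothetical daily mention-count file; the window and threshold are arbitrary and would need tuning against real history.

```python
import pandas as pd

# mentions.csv is hypothetical: columns date, our_product, competitor (daily mention counts)
df = pd.read_csv("mentions.csv", parse_dates=["date"]).set_index("date")
share = df["our_product"] / (df["our_product"] + df["competitor"])

window = 28  # four weeks of history
baseline = share.rolling(window).mean()
spread = share.rolling(window).std()

# Flag days where mindshare moves more than three standard deviations from its recent norm
significant = (share - baseline).abs() > 3 * spread
print(share[significant].tail())
```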
It might seem useful to know exactly what people were saying about the two products - which one they preferred and why. Until recently it has been almost impossible for software (and not always easy for humans, especially in unfamiliar cultural settings) to distinguish an enthusiastic "brilliant" from a sarcastic "brilliant", but Israeli researchers are now claiming a 77% precision in detecting sarcasm.
Joe McKendrick, New algorithm spots sarcasm in customer testimonials (Smart Planet, May 2010)
MacGregor Campbell, Just what we need: sarcasm software (New Scientist, May 2010)
However, tagging mentions according to sentiment still looks a pretty inexact science. Some vendors operating in this space don't include automated sentiment analysis at all (e.g. ConMetrics); others provide simple trends only, leaving humans to do the detailed analysis (e.g. Lexalytics).
But never mind the technical detail. The point of this kind of business intelligence is that it is actionable. Companies can get an early indication of the success of a marketing campaign, long before mindshare feeds through into sales.
And we aren't just interested in product mentions - we can also track discussion of particular design features of the product. How many people are talking about battery life or screen size or capacity or cost? This kind of detailed information helps identify the features that the marketing campaigns should emphasize, and may also feed into product development. Obviously, if battery life is the most talked-about feature of this class of product, then that's a valuable item of intelligence for product designers as well as for sales and marketing. (I wonder how easy it would be to integrate this kind of business intelligence with a requirements engineering tool/method such as Quality Function Deployment, or a statistical technique such as MaxDiff? See Eric Almquist and Jason Lee, What Do Customers Really Want?, Harvard Business Review, April 2009)
If you have enough high-quality data, with all the automatic replication and spam stripped out, then you can also track the influence paths across the Internet over time. Not only identifying the pages that talk about the Zune versus iPod, but which pages came out first, and which of the earlier pages are strongly referenced by later pages. Not just individual thought leaders but also communities or geographies - for example, a given buzz might start on university campuses before spreading to other demographic sectors. That tells you where you should conduct market trials if you want rapid dissemination, and also where you should go for a relatively isolated trial of some high-risk venture. It also tells you which websites to watch for potential trouble.
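Tracing influence paths in this way amounts to building a directed graph of who cites whom, and then weighting earlier pages by how much later coverage points back at them. A minimal sketch with networkx, using invented link data:

```python
import networkx as nx

# (citing_page, cited_page, days_after_launch) - invented link data
links = [
    ("campus-blog-2", "campus-blog-1", 1),
    ("tech-forum", "campus-blog-1", 3),
    ("mainstream-site", "campus-blog-1", 9),
    ("mainstream-site", "tech-forum", 9),
]

g = nx.DiGraph()
for citing, cited, day in links:
    g.add_edge(citing, cited, day=day)

# Pages that later coverage keeps pointing back to are likely origins of the buzz
influence = nx.pagerank(g)
for page, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.2f}")
```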
What interests me most about this kind of innovation is not the technical details but the potential for transforming the business process - to develop greater organizational intelligence. Two years ago, Onalytica founder Flemming Madsen laid out a vision in his blog Predicting Sales from Online Buzz (Jan 2008) and Predicting Sales from Online Buzz - 2 (April 2008).
- predicting sales, market share and other outcomes
- detecting changes in competitors’ behaviour
- setting targets known as “influence budgets”
- using “influence budgets” to predict whether an organization is on track to meet its actual revenue or market share targets, and take remedial action if required
But here's the thing I found most exciting. If an organization can develop sufficient confidence in the reliability of the predictions resulting from this kind of business intelligence, then the visible growth of influence and mindshare may enable it to sustain longer-term programmes and campaigns, instead of cancelling projects that don't deliver an immediate commercial return. Some people might imagine that an organization driven by buzz would be excessively short-termist - but the champions of this approach insist that good use of buzz by a truly intelligent organization could have quite the opposite effect.
I have talked to one large organization using this technology, and I'm hoping to publish this as a case study in the near future. In the meantime, I should be delighted to talk to any other organizations, to see what is actually happening in practice.
See also Just Shut Up and Listen, by Kishore S. Swaminathan of Accenture.
Tuesday, January 26, 2010
New Types of Business Intelligence
Actionable Intelligence
- Having the necessary information immediately available in order to deal with the situation at hand. With regard to call centers, it refers to agents having customer history and related product data available on screen before the call is taken. (Computer Language Company)
- Any intelligence you can use to improve your marketing position within the marketplace. (Reciprocal Consulting)
Appreciative Intelligence
Appreciative Intelligence is the ability to perceive the positive inherent generative potential in a given situation and to act purposively to transform the potential to outcomes. In other words, it is the ability to reframe a given situation to recognize the positive possibilities embedded in it but not apparent to the untrained eye, and to engage in the necessary actions so that the desired outcomes unfold from the generative aspects of the current situation. (Tojo Thatchenkery)
Collaborative Intelligence
Orchestrating BI services across a federated management or governance structure. Collaboration between knowledge workers. Identified in Richard Veryard, Service-Oriented Business Intelligence (September 2005). See also From Networked BI to Collaborative BI (April 2016).
Decision Analysis
Improving the way we make decisions. More robust collaborative capabilities embedded within BI tools. Formalized methods for evaluating the effectiveness of decisions made with those tools.
Wayne Eckerson, The Next Wave in BI: Decision Analysis (TDWI, Jan 2010)
Pervasive Business Intelligence
Stephen Swoyer, Pervasive BI: Still a Vision, Not Reality (TDWI, Jan 2010) via Franz Dill
Revolutionary Business Intelligence
Brian Gentile, The BI Revolution: A New Generation of Analytic Applications (TDWI, Oct 2010); The BI Revolution: Business Intelligence's Future (TDWI, Nov 2010)
Service-Oriented Business Intelligence
Richard Veryard, Service-Oriented Business Intelligence (September 2005)
Updated 3 April 2016
Wednesday, October 26, 2005
Oracle BI
Another briefing on Service-Oriented Business Intelligence (SOBI), this time from Oracle.
Oracle was keen to tell me about the integrated platform for SOA and BI - bundled together from all the products they've acquired recently. (Some analysts have criticized this bundling as Frankenstein, but I tend to agree with Radovan Janacek (Systinet) that this wiring-together is a perfectly valid use (nay, validation) of the power of SOA.)
Oracle is sceptical of all the flavours of BI - real-time BI, operational BI, and so on. Why not just BI? The focus of innovation for Oracle is doing BI better - and they don't seem particularly interested in changing the nature of BI functionality, nor extending its use to new domains. Oracle sees the primary value of SOA in allowing customers to deliver BI functionality more quickly and cheaply.
As a database vendor, Oracle sees the primary challenges of BI as technical ones - the growth and complexity in the quantities and sources of data, and the demands of speed. Moving data into a separate store for BI purposes is often more trouble than it's worth - so the preferred approach nowadays is to deliver all the BI functionality from the database itself. (Obviously Oracle still supports data warehousing as well.) Which means that Oracle is obliged to put a great deal of emphasis on performance and speed. For example, they quote some impressive improvements in the speed of generating OLAP cubes, from several hours to a few minutes.
One of the possible advantages of SOA is that BI functionality can be accessed in new ways. To start with, BI results can be distributed in various ways - via the Oracle portal, or via the collaboration suite. And Business Activity Monitoring (BAM) can be configured to respond to predefined BI events, such as KPI range checks. This is a useful step towards fully integrated BI.
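A KPI range check of the kind mentioned above is conceptually very simple - the value of wiring it into BAM is that the resulting event can be routed to whoever (or whatever) needs to act on it. A generic sketch, not Oracle's actual API, with invented KPI names and thresholds:

```python
# Hypothetical KPI thresholds; in practice these would come from the BI layer
KPI_RANGES = {
    "orders_per_hour": (200, 1500),
    "avg_response_ms": (0, 800),
}

def check_kpis(snapshot):
    """snapshot: dict of KPI name -> current value. Returns out-of-range events."""
    events = []
    for name, value in snapshot.items():
        low, high = KPI_RANGES.get(name, (float("-inf"), float("inf")))
        if not low <= value <= high:
            events.append({"kpi": name, "value": value, "expected": (low, high)})
    return events

# One event for orders_per_hour, which a BAM layer could route to a dashboard or alert
print(check_kpis({"orders_per_hour": 120, "avg_response_ms": 430}))
```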
Oracle is not yet convinced about the need to support subscription technologies such as RSS/Atom; this would probably be achieved via the portal as well. But the portal approach works best if BI enquiries are predefined, or at least controlled centrally. BI results are generally in the form of relational data - for example, sets of KPIs, or segmented files of customers. This implies a top-down architecture of BI usage, which looks okay for the kind of organization where human intelligence (in particular advanced analytical skill) is assumed to be concentrated at head office, but seems quite unsuitable for the power-to-the-edge organization.
But SOA may allow broader access to BI functionality - not just the results of predefined BI enquiries but the ability to invoke dynamic BI, rendered as web services. And fully collaborative BI requires not just sharing the BI results, but sharing the BI process. So for example instead of doing the segmentation centrally and distributing a segmented customer file, you might distribute the segmentation algorithm so that (a) it may be applied locally and dynamically, (b) it may be customized locally, and (c) local refinements and improvements can themselves be disseminated.
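To illustrate the difference between shipping a segmented customer file and shipping the segmentation itself, here is a sketch in which the central rule is published as a function and a local unit both applies and customizes it. All names and thresholds are invented.

```python
def central_segmentation(customer):
    """Centrally maintained rule, published as a service rather than as a segmented file."""
    if customer["annual_spend"] > 10000:
        return "premium"
    if customer["orders_last_year"] == 0:
        return "dormant"
    return "standard"

def regional_segmentation(customer):
    """A local refinement: applied dynamically, and a candidate to feed back to the centre."""
    segment = central_segmentation(customer)
    if segment == "standard" and customer.get("trade_account"):
        return "trade"  # a locally relevant segment the central rule does not know about
    return segment

print(regional_segmentation({"annual_spend": 2400, "orders_last_year": 5, "trade_account": True}))
# -> 'trade'
```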
Oracle is starting to look at extending the SOA-friendly aspects of BI, and I hope we can expect greater support for some of these issues in the next generation of the Oracle platform. No dates or details announced yet.
CBDI report on Service-Oriented Business Intelligence (October 2005)
See also: Oracle Business Intelligence Blog
Technorati Tags: BI business intelligence Oracle SOA
Oracle was keen to tell me about the integrated platform for SOA and BI - bundled together from all the products they've acquired recently. (Some analysts have criticized this bundling as Frankenstein, but I tend to agree with Radovan Janacek (Systinet) that this wiring-together is a perfectly valid use (nay, validation) of the power of SOA.)
Oracle is sceptical of all the flavours of BI - real-time BI, operational BI, and so on. Why not just BI? The focus of innovation for Oracle is doing BI better - and they don't seem particularly interested in changing the nature of BI functionality, nor extending its use to new domains. Oracle sees the primary value of SOA in allowing customers to deliver BI functionality more quickly and cheaply.
As a database vendor, Oracle sees the primary challenges of BI as technical ones - the growth and complexity in the quantities and sources of data, and the demands of speed. Moving data into a separate store for BI purposes is often more trouble than it's worth - so the preferred approach nowadays is to deliver all the BI functionality from the database itself. (Obviously Oracle still supports data warehousing as well.) Which means that Oracle is obliged to put a great deal of emphasis on performance and speed. For example, they quote some impressive improvements in the speed of generating OLAP cubes, from several hours to a few minutes.
One of the possible advantages of SOA is that BI functionality can be accessed in new ways. To start with, BI results can be distributed through several channels - via the Oracle portal, or via the collaboration suite. And Business Activity Monitoring (BAM) can be configured to respond to predefined BI events, such as KPI range checks. This is a useful step towards fully integrated BI.
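To make the BAM idea a little more concrete, here is a minimal sketch - in Python, with entirely hypothetical KPI names and thresholds, not Oracle's actual BAM configuration - of the kind of rule such a monitor applies: each KPI reading is checked against a predefined range, and an event is emitted whenever the reading falls outside it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KpiRange:
    """Predefined acceptable range for a KPI."""
    name: str
    lower: float
    upper: float

def check_kpi(reading: float, kpi: KpiRange) -> Optional[dict]:
    """Return a BI event if the reading breaches the KPI range, else None."""
    if reading < kpi.lower or reading > kpi.upper:
        return {
            "event": "kpi_out_of_range",
            "kpi": kpi.name,
            "reading": reading,
            "range": (kpi.lower, kpi.upper),
        }
    return None

# A BAM-style monitor would publish an event like this to whatever
# downstream process is registered to respond to it.
event = check_kpi(reading=0.92, kpi=KpiRange("order_fill_rate", 0.95, 1.0))
if event:
    print(event)
```

The interesting part is not the range check itself, but the fact that the resulting event can trigger further processes without anyone running a report.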
Oracle is not yet convinced of the need to support subscription technologies such as RSS/Atom; if such support were added, it would probably be delivered via the portal as well. But the portal approach works best if BI enquiries are predefined, or at least controlled centrally. BI results are generally in the form of relational data - for example, sets of KPIs, or segmented files of customers. This implies a top-down architecture of BI usage, which looks okay for the kind of organization where human intelligence (in particular advanced analytical skill) is assumed to be concentrated at head office, but seems quite unsuitable for the power-to-the-edge organization.
But SOA may allow broader access to BI functionality - not just the results of predefined BI enquiries but the ability to invoke dynamic BI, rendered as web services. And fully collaborative BI requires not just sharing the BI results, but sharing the BI process. So for example instead of doing the segmentation centrally and distributing a segmented customer file, you might distribute the segmentation algorithm so that (a) it may be applied locally and dynamically, (b) it may be customized locally, and (c) local refinements and improvements can themselves be disseminated.
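As a rough illustration of distributing the algorithm rather than the results, the following sketch (hypothetical rules and customer attributes, not any vendor's actual API) shows a central segmentation rule being applied locally and then overridden by a local refinement. In a SOA setting, the same rule would simply be exposed as a web service rather than passed around as a local function.

```python
from typing import Callable, Dict, List

# Hypothetical default segmentation rule, as it might be published centrally.
def default_segment(customer: Dict) -> str:
    if customer["annual_spend"] > 10_000:
        return "premium"
    if customer["orders_last_year"] > 12:
        return "frequent"
    return "standard"

def segment_customers(customers: List[Dict],
                      rule: Callable[[Dict], str] = default_segment) -> Dict:
    """Apply a segmentation rule locally and dynamically."""
    return {c["id"]: rule(c) for c in customers}

# Local refinement: a regional office customizes the central rule.
def regional_segment(customer: Dict) -> str:
    if customer.get("region") == "APAC" and customer["annual_spend"] > 5_000:
        return "premium"  # lower threshold for a growth market
    return default_segment(customer)

customers = [
    {"id": 1, "annual_spend": 6_000, "orders_last_year": 3, "region": "APAC"},
    {"id": 2, "annual_spend": 2_000, "orders_last_year": 20, "region": "EMEA"},
]
print(segment_customers(customers, rule=regional_segment))
```

The design point is that the central rule remains the default, while local overrides can be published back and disseminated through the same mechanism.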
Oracle is starting to look at extending the SOA-friendly aspects of BI, and I hope we can expect greater support for some of these issues in the next generation of the Oracle platform. No dates or details announced yet.
CBDI report on Service-Oriented Business Intelligence (October 2005)
See also: Oracle Business Intelligence Blog
Technorati Tags: BI business intelligence Oracle SOA
Friday, October 21, 2005
Focus
I have issued a number of probes to BI vendors for views on the potential synergies between SOA and BI - what I've been calling Service Oriented Business Intelligence (SOBI).
Service-Oriented Business Intelligence (SOAPbox blog)
Web Services to Improve Business Intelligence (CBDI Journal, June 2003)
Service-Oriented Business Intelligence (CBDI Journal, October 2005)
This week I had a useful briefing from Information Builders, makers of WebFocus. I first encountered Focus when I was working with Fourth Generation Languages (4GL) over twenty years ago, and it's interesting to see how the present stance of Focus (which I must now remember to call WebFocus) represents both continuity and change.
Information Builders certainly seem clued up about web services and SOA, and claim to have been playing in this space rather longer than some other BI vendors.
SOA
IB's main entry into the SOA space is a product called iWay, which calls itself an "Adaptive Framework for SOA" and claims to be "a complete toolset for creating composite applications and reusing existing IT assets". iWay is marketed by a separate company, iWay Software, which is a wholly owned subsidiary of Information Builders.
One interesting piece of functionality is that it can be used to make data from legacy systems (such as unstructured data from notes fields) visible to the Google Enterprise Server, and therefore available for aggregation and analysis. May be worth a look.
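I don't have details of how iWay does this, but as a purely illustrative sketch of the general pattern - pulling free-text notes fields out of a legacy store and writing them out as documents that an enterprise search crawler can index - something along these lines (hypothetical table and file layout) would serve:

```python
import sqlite3
from pathlib import Path
from xml.sax.saxutils import escape

# Hypothetical legacy store with free-text notes attached to orders.
conn = sqlite3.connect("legacy_orders.db")
rows = conn.execute("SELECT order_id, notes FROM order_notes").fetchall()

# Write each notes field as a small XML document into a directory that
# an enterprise search crawler could be pointed at.
out_dir = Path("search_feed")
out_dir.mkdir(exist_ok=True)

for order_id, notes in rows:
    doc = (
        "<document>"
        f"<id>order-{order_id}</id>"
        f"<body>{escape(notes or '')}</body>"
        "</document>"
    )
    (out_dir / f"order-{order_id}.xml").write_text(doc, encoding="utf-8")
```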
Value of BI
4GLs were always supposed to improve productivity for developers, and to improve access for end-users. This twin agenda clearly remains in force. Information Builders sees business intelligence as a key source of value to any business organization; so the greater the number of people using the BI tools, the greater the potential value to the business. (Well, they would say that, wouldn't they?) There is therefore considerable emphasis on improving the accessibility of BI functionality, as well as achieving economies of scale in the delivery of BI functionality.
Of course, technology vendors are naturally prone to make optimistic statements about the value of their tools, and the importance of having everyone using them. In this case, a realistic assessment of the value of BI must depend on analysing the potential to improve business processes, where information needs are tied to specific business responsibilities. Business processes and services may be improved by introducing effective feedback loops - for example, if you allow customers to access restaurant inspection data, the dirty restaurants disappear pretty quickly. Note that these control loops typically extend beyond the boundaries of a single organization.
Instead of business intelligence being a highly specialized function, restricted to head office wonks with expensive and complicated gear, the power of business intelligence is taken to the edge of the organization - together with the corresponding accountability.
Integrated BI
I agree that the business value of BI may be greatly enhanced when BI is integrated with the business process. I have been calling this integrated BI; Information Builders talks about operational BI or pervasive BI, which are perhaps not quite the same thing, but are at least broadly in the same area.
There are lots of integration techniques that are relevant here - not just web services, but also Web 2.0 technologies such as Atom and RSS. For example, it is possible to subscribe to a complex enquiry. But we can push this further - imagine being able to subscribe to a hypothesis, and then being notified whenever any relevant evidence (for or against the hypothesis) becomes available.
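As a rough sketch of what subscribing to a hypothesis might look like (hypothetical feed URL and naive keyword matching, using the feedparser library), the idea is simply to poll an Atom feed of new findings and flag any entry that appears to bear on the stated hypothesis:

```python
import feedparser  # third-party library for parsing RSS/Atom feeds

# Hypothetical Atom feed of newly published BI findings.
FEED_URL = "https://bi.example.com/findings.atom"

# A "hypothesis" here is just a label plus keywords that suggest relevant evidence.
hypothesis = {
    "label": "Late deliveries are driving churn in the north region",
    "keywords": ["late delivery", "churn", "north region"],
}

def relevant_entries(feed_url: str, keywords) -> list:
    """Return feed entries whose title or summary mentions any keyword."""
    feed = feedparser.parse(feed_url)
    matches = []
    for entry in feed.entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(k.lower() in text for k in keywords):
            matches.append(entry)
    return matches

for entry in relevant_entries(FEED_URL, hypothesis["keywords"]):
    print("Possible evidence:", entry.get("title"))
```

A real implementation would obviously need something smarter than keyword matching, but the subscription mechanics are no more exotic than an ordinary feed reader.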
Collaborative BI
I have argued that the next step beyond integrated BI is collaborative BI - supporting collaboration between distributed knowledge workers. Having implemented simple versions of Embedded BI and Integrated BI, Information Builders has announced the intention to introduce some support for Collaborative BI in the 2006 release of WebFocus. I look forward to seeing the details of this. However, I suspect that the full power of Collaborative BI will take longer to develop.
BI Process
So how is all this technological product innovation going to be reflected by process innovation - affecting the way that people build and use BI systems and services? Perhaps it is too early to say. Vendors like Information Builders may contribute to this innovation, may disseminate patterns and best practices, and may wish to develop formal methodologies. But I suspect the important changes will emanate from the user community, and will be slower to emerge.