Friday, January 09, 2009

OWASP Top Ten - Update

OWASP is the Open Web Application Security Project. It is perhaps best known for publishing lists of the Top Ten (or more recently Top Twenty-Five) security bugs (or vulnerabilities or threats or risks).

Following my earlier post on the OWASP Top Ten, as well as an exchange of emails with someone in the OWASP community, I posted the following question to the OWASP discussion group on Linked-In.

Do Top-Ten Lists distract from a holistic approach to security?

If you ask people to pay attention to the top ten items in a list of threats or vulnerabilities, they will almost inevitably pay less attention to other things. (Intelligent people are aware of the limitations of lists, but even they are not immune to such effects.)

If a security vendor has a particular interest in one item - for example selling protection or detection for a particular threat - then there may be some commercial significance in whether that item makes the top ten or not. So a commercially minded security vendor will look for ways of influencing (aka distorting) the top ten list in his favour.

Meanwhile, intelligent attackers may calculate that a significant portion of security dollars will be consumed by the top ten, leaving other vulnerabilities under-funded.

The OWASP website does contain a page (Where To Go From Here) explaining that the top ten list is only the starting point of a proper security analysis, but this page is very poorly signposted, and I suspect that many people never reach it.

The official purpose of the OWASP list is to educate people about the consequences of security vulnerabilities. But I think there is a broader education purpose, and I fear that top ten lists distract from this purpose.

This prompted a couple of interesting responses, expressing different views on the real purpose of the OWASP Top Ten. Michael Vance said that the items in the top ten list are those most likely to occur or those most likely to have the greatest impact. Christian Frichot said that lists are good at removing the low-hanging fruit; I interpret this as meaning the most obvious and easiest-to-fix items, which is not necessarily the same as frequency or impact.

In any case, the methodology for creating the OWASP Top Ten list does not seem to be designed to produce a list with the characteristics required by either Michael or Christian. It is partly based on historical data (frequency, but not impact or low-hangingness, as far as I can see), adjusted to allow for projections of increased future risk. For example, one issue (CSRF) was promoted to the list because the team believed it to be important, but no evidence was produced to support this belief. So is the OWASP Top Ten list really based on a systematic assessment of (generic) likelihood and impact?
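
To make the distinction concrete, here is a minimal sketch (in Python, with invented figures) of how a ranking based on frequency alone can differ from one based on likelihood and impact together. It does not reflect the actual OWASP methodology; it simply illustrates why the two orderings need not coincide.

    # Hypothetical illustration: the vulnerability names are real categories,
    # but the likelihood and impact figures are invented for this sketch.
    vulns = [
        ("Cross-Site Scripting (XSS)",        0.40, 2),  # (name, likelihood, impact 1-5)
        ("SQL Injection",                     0.15, 5),
        ("CSRF",                              0.10, 4),
        ("Insecure Direct Object Reference",  0.08, 3),
    ]

    # Ranking by frequency alone -- roughly what historical incident data gives you.
    by_frequency = sorted(vulns, key=lambda v: v[1], reverse=True)

    # Ranking by a simple risk score: likelihood multiplied by impact.
    by_risk = sorted(vulns, key=lambda v: v[1] * v[2], reverse=True)

    print("By frequency:", [name for name, _, _ in by_frequency])
    print("By risk:     ", [name for name, _, _ in by_risk])

A common but low-impact issue can top the first list while a rarer but more damaging one tops the second, which is why frequency data alone does not settle the question.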

In any event, it would be strange if the same list were equally relevant to all applications in all organizations. Do we expect a retail bank to have the same security risks as a nuclear power plant? Do we expect an airline to have the same security risks as an online bookstore?

Clearly it would be stupid to rely completely on the Top Ten List - although I suspect that some people do just that. But my question is more fundamental - what are the grounds for thinking that a top ten list improves the overall process, rather than just adding a redundant step into the process? Christian's argument is interesting - by dealing quickly with the easy and obvious generic vulnerabilities, we can spend more time on the specific ones. But is that what people actually do?

Michael acknowledges that there is a significant disconnect between the way that Top Ten (and Top 20 and Top 25 and even Threat Classification) lists should be used and the way that they are used. He mentions a specific concern that this list will be misused by being improperly inserted into procurement language.

If OWASP were merely an academic organization, it could deny responsibility for how other people use its lists. "We produce the perfect lists, it's not our fault if people abuse them." But if OWASP is trying to make a real practical difference to security, then the actual effects and effectiveness of these lists are important.

Meanwhile, I am happy to see that other security experts agree with my concerns. Gary McGraw (CTO of Cigital) has just published an excellent article called Software [In]security: Top 11 Reasons Why Top 10 (or Top 25) Lists Don’t Work (via Bruce Schneier).


Update (March 2009)

Tom Brennan has just posed a question on the Linked-In discussion: "So what OWASP project are you going to start that will change this?" So the way to influence existing projects within OWASP is to start a rival project, is it? What a strange organization!


Related posts: OWASP Top Ten (October 2008), OWASP Top Ten 2010 (January 2010), Low-Hanging Fruit (August 2019)
