
Sunday, April 10, 2022

Lie Detectors at Airports

@jamesvgingerich reports that the EU is putting lie detector robots on its borders. @Abebab is horrified.

 

There are several things worth noting here.

Firstly, lie detectors work by detecting involuntary actions (eye movements, heart rate) that are thought to be a proxy for mendacity. But there are often alternative explanations for these actions, so interpreting them is highly problematic. See my post on Memory and the Law (June 2008).

Secondly, there is a lot of expensive and time-wasting technology installed at airports already, which has dubious value in detecting genuine threats, but may help to make people feel safer. Bruce Schneier calls this Security Theatre. See my posts on the False Sense of Security (June 2019) and Identity, Privacy and Security at Heathrow Terminal Five (March 2008).

What is more important is to consider the consequences of adding this component (whether reliable or otherwise) to the larger system. In my post Listening for Trouble (June 2019), I discussed the use of Aggression Detection microphones in US schools, following an independent study that was carried out with active collaboration from the supplier of the equipment. Obviously this kind of evaluation requires some degree of transparency.

Most important of all is the ethical question. Is this technology biased against certain categories of subject, and what are the real-world consequences of being falsely identified by this system? Is having a human in the loop sufficient protection against the dangers of algorithmic profiling? See Algorithmic Bias (March 2021).

Given the inaccuracy of detection, there may be a significant rate of false positives and false negatives. False positives affect the individual concerned, who may suffer consequences ranging from inconvenience and delay to much worse. False negatives mean that a person has got away with an undetected lie, which has consequences for society as a whole.
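To see why even a seemingly accurate detector produces so many false positives, here is a minimal sketch (with numbers I have assumed purely for illustration, not figures from any real deployment) showing how the result depends on how rare lying actually is among travellers.

```python
# Minimal sketch: how base rates affect a lie detector's false positives.
# All numbers below are assumptions for illustration, not real measurements.

travellers = 1_000_000
liar_rate = 0.001           # assume 1 in 1000 travellers is lying about something material
sensitivity = 0.80          # assume the detector flags 80% of actual liars
false_positive_rate = 0.10  # assume it also flags 10% of truthful travellers

liars = travellers * liar_rate
truthful = travellers - liars

true_positives = liars * sensitivity
false_positives = truthful * false_positive_rate

flagged = true_positives + false_positives
precision = true_positives / flagged

print(f"Travellers flagged: {flagged:,.0f}")
print(f"Of those, actual liars: {true_positives:,.0f} ({precision:.1%})")
# With these assumptions, over 99% of the travellers flagged are telling the truth.
```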

How much you think this matters depends on what you think they might be lying about, and how important this is. For example, it may be quicker to say you packed your suitcase yourself and it hasn't been out of your sight, even if this is not strictly true, because any other answer may trigger loads of other time-wasting questions. However, other lies may be more dangerous ...

 


For more details on the background of this initiative, see

Matthias Monroy, EU project iBorderCtrl: Is the lie detector coming or not? (26 April 2021)

Thursday, July 18, 2019

Robust Against Manipulation

As algorithms get more sophisticated, so do the opportunities to trick them. An algorithm can be forced or nudged to make incorrect decisions, in order to yield benefits to a (hostile) third party. John Bates, one of the pioneers of Complex Event Processing, has raised fears of algorithmic terrorism, but algorithmic manipulation may also be motivated by commercial interests or simple vandalism.

An extreme example of this could be a road sign that reads STOP to humans but is misread as something else by self-driving cars. Another example might be false signals that are designed to trigger algorithmic trading and thereby nudge markets. Given the increasing reliance on automatic screening machines at airports and elsewhere, there are obvious incentives for smugglers and terrorists to develop ways of fooling these machines - either to get their stuff past the machines, or to generate so many false positives that the machines aren't taken seriously. And of course email spammers are always looking for ways to bypass the spam filters.

"It will also become increasingly important that AI algorithms be robust against manipulation. A machine vision system to scan airline luggage for bombs must be robust against human adversaries deliberately searching for exploitable flaws in the algorithm - for example, a shape that, placed next to a pistol in one's luggage, would neutralize recognition of it. Robustness against manipulation is an ordinary criterion in information security; nearly the criterion. But it is not a criterion that appears often in machine learning journals, which are currently more interested in, e.g., how an algorithm scales upon larger parallel systems." [Bostrom and Yudkowsky]

One kind of manipulation involves the construction of misleading inputs (known in the literature as "adversarial examples"): for example, an image crafted to exploit the inaccuracies of a specific image recognition algorithm so that it is incorrectly classified, thus producing an incorrect action (or suppressing the correct action).
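The following sketch illustrates the general idea in the style of the fast gradient sign method described by Goodfellow and colleagues. The toy linear "detector" is my own assumption for illustration; real attacks target far more complex models, but the principle is the same.

```python
# Sketch of an adversarial perturbation against a toy linear "detector".
# The model and numbers are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)          # weights of a toy linear detector
x = rng.normal(size=100)          # an input the detector currently scores correctly

def score(v):
    return float(w @ v)           # positive score = "threat", negative = "benign"

# For a linear model, the gradient of the score with respect to the input is just w.
epsilon = 0.1                     # assumed perturbation budget (kept deliberately small)
x_adv = x - epsilon * np.sign(w)  # nudge every feature slightly against detection

print("original score: ", round(score(x), 3))
print("perturbed score:", round(score(x_adv), 3))
# A small but structured perturbation shifts the score far more than random
# noise of the same size would - which is what makes such attacks practical.
```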

Another kind of manipulation involves poisoning the model - deliberately feeding a machine learning algorithm with biased or bad data, in order to disrupt or skew its behaviour. (Historical analogy: manipulation of pop music charts.)
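Here is an equally simplified sketch of the poisoning idea, assuming a toy nearest-centroid classifier of my own invention rather than any real screening system: by slipping mislabelled points into the training data, an attacker drags the decision boundary in a favourable direction.

```python
# Sketch of training-data poisoning against a toy one-dimensional
# nearest-centroid classifier. Assumed for illustration only.
import numpy as np

rng = np.random.default_rng(1)
benign = rng.normal(0.0, 1.0, 500)    # legitimate traffic, centred at 0
hostile = rng.normal(4.0, 1.0, 500)   # hostile traffic, centred at 4

def threshold(benign_samples, hostile_samples):
    # decision boundary halfway between the two class means
    return (benign_samples.mean() + hostile_samples.mean()) / 2

clean = threshold(benign, hostile)

# Poisoning: hostile-looking data is deliberately mislabelled as benign.
poison = rng.normal(4.0, 0.5, 100)
poisoned = threshold(np.concatenate([benign, poison]), hostile)

print(f"clean threshold:    {clean:.2f}")
print(f"poisoned threshold: {poisoned:.2f}")
# The boundary drifts towards the hostile class, so more hostile inputs
# are now waved through as benign.
```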

We have to assume that some bad actors will have access to the latest technologies, and will themselves be using machine learning and other techniques to design these attacks, and this sets up an arms race between the good guys and the bad guys. Is there any way to keep advanced technologies from getting in the wrong hands?

In the security world, people are familiar with the concept of Distributed Denial of Service (DDOS). But perhaps this now becomes Distributed Distortion of Service. Which may be more subtle but no less dangerous.

While there are strong arguments for algorithmic transparency of automated systems, some people may be concerned that transparency will aid such attacks. The argument here is that the more adversaries can discover about the algorithm and its training data, the more opportunities for manipulation. But it would be wrong to conclude that we should keep algorithms safe by keeping them secret ("security through obscurity"). A better conclusion would be that transparency should be a defence against manipulation, by making it easier for stakeholders to detect and counter such attempts.




John Bates, Algorithmic Terrorism (Apama, 4 August 2010). To Catch an Algo Thief (Huffington Post, 26 Feb 2015)

Nick Bostrom and Eliezer Yudkowsky, The Ethics of Artificial Intelligence (2011)

Ian Goodfellow, Patrick McDaniel and Nicolas Papernot, Making Machine Learning Robust Against Adversarial Inputs (Communications of the ACM, Vol. 61 No. 7, July 2018) Pages 56-66. See also video interview with Papernot.

Neil Strauss, Are Pop Charts Manipulated? (New York Times, 25 January 1996)

Wikipedia: Security Through Obscurity

Related posts: The Unexpected Happens (January 2017)

Wednesday, June 26, 2019

Listening for Trouble

Many US schools and hospitals have installed Aggression Detection microphones that claim to detect sounds of aggression, thus allowing staff or security personnel to intervene to prevent violence. Sound Intelligence, the company selling the system, claims that the detector has helped to reduce aggressive incidents. What are the ethical implications of such systems?

ProPublica recently tested one such system, enrolling some students to produce a range of sounds that might or might not trigger the alarm. They also talked to some of the organizations using it, including a hospital in New Jersey that has now decommissioned the system, following a trial that (among other things) failed to detect a seriously agitated patient. ProPublica's conclusion was that the system was "less than reliable".

Sound Intelligence is a Dutch company, which has been fitting microphones into street cameras for over ten years, in the Netherlands and elsewhere in Europe. This was approved by the Dutch Data Protection Regulator on the argument that the cameras are only switched on after someone screams, so the privacy risk is reduced.

But Dutch cities can be pretty quiet. As one of the developers admitted to the New Yorker in 2008, "We don’t have enough aggression to train the system properly". Many experts have questioned the validity of installing the system in an entirely different environment, and Sound Intelligence refused to reveal the source of the training data, including whether the data had been collected in schools.

In theory, a genuine scream can be identified by a sound pattern that indicates a partial loss of control of the vocal cords, although the accurate detection of this difference can be compromised by audio distortion (known as clipping). When people scream on demand, they protect their vocal cords and do not produce the same sound. (Actors are taught to simulate screams, but the technology can supposedly tell the difference.) So it probably matters whether the system is trained and tested using real screams or fake ones. (Of course, one might have difficulty persuading an ethics committee to approve the systematic production and collection of real screams.)
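For readers unfamiliar with clipping, here is a rough sketch of how one might check a recording for it. This is my own assumed illustration, not the method used by Sound Intelligence or ProPublica.

```python
# Rough sketch: detecting clipped (distorted) samples in a recording.
# Assumed for illustration; not any vendor's or tester's actual method.
import numpy as np

def clipping_ratio(samples, limit=1.0, tolerance=0.001):
    """Fraction of samples pinned at (or beyond) the recording range."""
    clipped = np.abs(samples) >= (limit - tolerance)
    return clipped.mean()

# A loud scream recorded too close to the microphone gets flattened at the
# top and bottom of the waveform, destroying the very detail that might
# distinguish a genuine scream from a performed one.
t = np.linspace(0, 1, 8000)
loud = np.clip(1.5 * np.sin(2 * np.pi * 440 * t), -1.0, 1.0)
quiet = 0.5 * np.sin(2 * np.pi * 440 * t)

print(f"loud recording:  {clipping_ratio(loud):.1%} of samples clipped")
print(f"quiet recording: {clipping_ratio(quiet):.1%} of samples clipped")
```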

Can any harm be caused by such technologies? Apart from the fact that schools may be wasting money on stuff that doesn't actually work, there is a fairly diffuse harm of unnecessary surveillance. Students may learn to suppress all varieties of loud noises, including sounds of celebration and joy. There may also be opportunities for the technology to be used as a tool for harming someone - for example, by playing a doctored version of a student's voice in order to get that student into trouble. Or, if the security guard is a bit trigger-happy, to get that student killed.

Technologies like this can often be gamed. For example, a student or ex-student planning an act of violence would be aware of the system and would have had ample opportunity to test what sounds it did or didn't respond to.

Obviously no technology is completely risk-free. If a technology provides genuine benefits in terms of protecting people from real threats, then this may outweigh any negative side-effects. But if the benefits are unproven or imaginary, as ProPublica suggests, this is a more difficult equation.

ProPublica quoted a school principal from a quiet leafy suburb, who justified the system as providing "a bit of extra peace of mind". This could be interpreted as a desire to reassure parents with a false sense of security. Which might be justifiable if it allowed children and teachers to concentrate on schoolwork rather than worrying unnecessarily about unlikely scenarios, or pushing for more extreme measures such as arming the teachers. (But there is always an ethical question mark over security theatre of this kind.)

But let's go back to the nightmare scenario that the system is supposed to protect against. If a school or hospital equipped with this system were to experience a mass shooting incident, and the system failed to detect the incident quickly enough (which on the ProPublica evidence seems quite likely), the incident investigators might want to look at sound recordings from the system. Fortunately, these microphones "allow administrators to record, replay and store those snippets of conversation indefinitely". So that's alright then.

In addition to publishing its findings, ProPublica also published the methodology used for testing and analysis. The first point to note is that this was done with the active collaboration of the supplier. It seems they were provided with good technical information, including the internal architecture of the device and the exact specification of the microphone used. They were able to obtain an exactly equivalent microphone, and could rewire the device and intercept the signals. They discarded samples that had been subject to clipping.

The effectiveness of any independent testing and evaluation is clearly affected by the degree of transparency of the solution, and the degree of cooperation and support provided by the supplier and the users. So this case study has implications, not only for the testing of devices, but also for transparency and system access.




Jack Gillum and Jeff Kao, Aggression Detectors: The Unproven, Invasive Surveillance Technology Schools Are Using to Monitor Students (ProPublica, 25 June 2019)

Jeff Kao and Jack Gillum, Methodology: How We Tested an Aggression Detection Algorithm (ProPublica, 25 June 2019)

John Seabrook, Hello, Hal (New Yorker, 16 June 2008)

P.W.J. van Hengel and T.C. Andringa, Verbal aggression detection in complex social environments (IEEE Conference on Advanced Video and Signal Based Surveillance, 2007)

Groningen makes "listening cameras" permanent (Statewatch, Vol 16 no 5/6, August-December 2006)

Wikipedia: Clipping (Audio)

Related posts: Affective Computing (March 2019), False Sense of Security (June 2019)


Updated 28 June 2019. Thanks to Peter Sandman for pointing out a lack of clarity in the previous version.

Saturday, November 25, 2017

Pax Technica - On Risk and Security

#paxtechnica Some further thoughts arising from the @CRASSHlive conference in Cambridge on The Implications of the Internet of Things. (For a comprehensive account, see @LaurieJ's livenotes.)

Many people are worried about the security implications of the Internet of Things. The world is being swamped with cheap internet-enabled devices. As the manufacturing costs, size and power consumption of these devices are being driven down, most producers have neither the expertise nor the capacity to build any kind of security into them.

One of the reasons why this problem is increasing is that it is cheaper to use a general-purpose chip than to design a special-purpose chip. So most IoT devices have far more processing power and functionality than they strictly need. This extra functionality can then be co-opted for covert or malicious purposes. IoT devices may easily be recruited into a global botnet, and devices from some sources may even have been covertly designed for this purpose.

Sensors are bad enough - witness compromised baby monitors and sex toys. Additional concerns apply to IoT actuators - devices that can produce physical effects: for example, lightbulbs that can flash (triggering epileptic fits), thermostats that can switch on simultaneously across a city (melting the grid), or centrifuges that can spin out of control (as in the attempted sabotage of Iran's nuclear capability).

Jon Crowcroft proposed that some of this could be addressed in terms of safety and liability. Safety is a useful driver for increased regulation, and insurance companies will be looking for ways to protect themselves and their corporate customers. While driverless cars generate much discussion, similar questions of safety and liability arise from any cars containing significant quantities of new technology. What if the brake algorithm fails? And given the recent history of cheat software by car manufacturers, can we trust the car not to alter the driver logs in order to evade liability for an accident?

In many cases, the consumer can be persuaded that there are benefits from internet-enabled devices, and these benefits may depend on some level of interoperability between multiple devices. But we aren't equipped to reason about the trade-off between accessibility/usability and security/privacy.

For comparison's sake, consider a retailer who has to decide whether to place the merchandise in locked glass cases or on open shelves. Open shelves will result in more sales, but also more shoplifting. So the retailer locks up the jewelry but not the pencils or the furniture, and this is based on a common-sense balance of value and risk.

But with the Internet of Things, people generally don't have a good enough understanding of value and risk to be able to reason intelligently about this kind of trade-off. Philip Howard advises users to appreciate that devices "have an immediate function that is useful to you and an indirect function that is useful to others" (p255). But just knowing this is not enough. True security will only arise when we have the kind of transparency (or visibility or unconcealment) that I referenced in my previous post.


Related Posts

Defeating the Device Paradigm (October 2015)
Pax Technica - The Book (November 2017)
Pax Technica - The Conference (November 2017)
The Smell of Data (December 2017)
Outdated Assumptions - Connectivity Hunger (June 2018)


References

Cory Doctorow, The Coming War on General Computation (2011)

Carl Herberger, How hackers will exploit the Internet of Things in 2017 (HelpNet Security, 14 November 2016)

Philip Howard, Pax Technica: How The Internet of Things May Set Us Free or Lock Us Up (Yale 2015)

Laura James, Pax Technica Notes (Session 1, Session 2, Session 3, Session 4)

Holly Robbins, The Path for Transparency for IoT Technologies (ThingsCon, June 2017)

Jack Wallen, Five nightmarish attacks that show the risks of IoT security (ZDNet, 1 June 2017)

Friday, February 18, 2011

Jeopardy and Risk

@Forrester's Andras Cser notes the victory of IBM's Watson computer in a TV quiz game, and asks How Can You Capitalize On This In Risk And Fraud Management?

In his short blogpost, Cser doesn't offer an answer to this question. He merely makes one assertion and one prediction.

Firstly he asserts an easy and superficial connection between the game of Jeopardy and the profession of security, based on "the complexity, amount of unstructured background information, and the real-time need to make decisions." Based on this connection, he makes a bold prediction on behalf of Forrester.

"Forrester predicts that the same levels of Watson's sophistication will appear in pattern recognition in fraud management and data protection. If Watson can answer a Jeopardy riddle in real time, it will certainly be able to find patterns of data loss, clustering security incidents, and events, and find root causes of them. Mitigation and/or removal of those root causes will be easy, compared to identifying them."

As this is presented as a corporate prediction rather than merely a personal opinion, I'm assuming that this has gone through some kind of internal peer review, and is based on an analytical reasoning process supported by detailed discussions with the IBM team responsible for Watson. I'm assuming Forrester has a robust model of decision-making that justifies Cser's confidence that the Jeopardy victory can be easily translated into the fraud management and data protection domain within the current generation of technology. (Note that the prediction refers to what Watson will be able to do, not what some future computer might be able to do.)

For my part, I have not yet had the opportunity to talk with the IBM team and congratulate them on their victory, but there are some important questions to explore. I think one of the most interesting elements of the Watson victory is not the complexity - which other commentators such as Paul Miller of Engadget have downplayed - but the apparent ability to outwit the other competitors. This ability may well be relevant to a more agile and intelligent approach to security, but that's a long way from the simplistic connection identified by Cser. Meanwhile, I look forward to seeing the evidence that Watson is capable of analysing root causes, which would be a lot harder than winning at Jeopardy.



Paul Miller, Watson wins it all, humans still can do some other cool things (Engadget 16 Feb 2011)
IBM's Watson supercomputer crowned Jeopardy king (BBC News 17 Feb 2011)

Wednesday, April 14, 2010

Enterprise 2.0 inside the firewall?

@infovark's Dean blogs about why he thinks Enterprise 2.0 will fail, and claims that the case for E2.0 inside the firewall is considerably more difficult.

I think the main problem with the case for “E2.0 inside the firewall” is the word “firewall”, which represents an outdated but still common attitude towards maintaining organizational boundaries. I wouldn’t be at all surprised if an organization that relies on firewalls struggles to get the benefits from open distributed business and technology, including Enterprise 2.0.

Dean replies
"It’s true that many forward-thinking organizations are becoming more transparent, and the borders between them are becoming less distinct. Still, eliminating the firewall altogether would require a lot of infrastructure changes. ... An even bigger challenge is the political one. Changing the Internet from a 'network of networks' paradigm to a 'unified network' approach would require far more coordination than most companies — and countries — would be willing to undertake."
I agree that shifting away from firewall-based security is a significant strategic move for an organization, involving not just infrastructure but also politics. There are some political issues that would have to be tackled if the organization is to achieve any potential benefits from Enterprise 2.0.

But the shift away from the firewall (sometimes called Deperimeterization) doesn't necessarily entail the second shift Dean mentions, from a 'network of networks' paradigm to a 'unified network' approach, and I am not advocating this. There will perhaps always be limits to interoperability, and there will always be some structure to the network of networks, but this structure will be more open and innovative, and not driven primarily by an obsolete security architecture.

Thursday, January 07, 2010

OWASP Top Ten 2010

@johnccr asks me to take a look at the new OWASP Top Ten 2010 RC1 (pdf), saying "it would be interesting to know if it changed your perception". So here are a few quick comments.

I'm certainly happy to acknowledge that this version makes the limitations of the Top Ten approach much clearer than previous versions, and explicitly encourages organizations to "think beyond the ten risks here". The document is careful not to claim the Top Ten as a full application security program, and warns readers not to stop at ten, because "there are hundreds of issues that could affect the overall security of a web application". But then surely this implies we shouldn't be wasting time reading this document at all; we should be reading the OWASP Developer’s Guide, "which is essential reading for anyone developing web applications today".

The status of the top ten items as risks (rather than, say, weaknesses or vulnerabilities or threats) is also a bit clearer, and the ranking of risks is based on the scale of the risk, not just the frequency of the attack. However, the document also refers to "relatively simple security problems like those in the OWASP Top 10" - which makes it seem that they may be the most obvious rather than the most problematic. Making people aware of simple problems doesn't necessarily promote awareness of more complex problems.


To my mind, the trouble with this kind of list is that it encourages bad thinking. Not only are some risks regarded as more attention-worthy than others (based on a generalized model of risk that may not be relevant to your organization or application portfolio), but each risk is considered in isolation. A holistic understanding of security and risk needs to look at the composition of risk - how several apparently small risks can sometimes be multiplied into a very large risk.
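As a back-of-the-envelope illustration of this composition effect (my own example, not anything taken from the OWASP document), consider what happens when several individually small risks are present at once.

```python
# Back-of-the-envelope sketch: composition of several "small" risks.
# The probabilities are assumptions for illustration only.

# Suppose an application has ten independent weaknesses, each judged to have
# only a 3% chance of being exploited over the coming year.
per_weakness = 0.03
count = 10

# Probability that at least one of them is exploited:
combined = 1 - (1 - per_weakness) ** count
print(f"{combined:.1%}")  # roughly 26% - no longer a "small" risk

# And if one weakness makes another easier to exploit (so they are not
# independent), the combined risk can be larger still - which is exactly
# what a list of isolated items fails to capture.
```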

I'm also concerned about limiting the analysis of risks to application security itself. Presumably a full security risk analysis would need to look at social attacks as well as technical attacks, but the Top Ten are all drawn from the technical side. I looked for this technical focus to be stated and explained somewhere, perhaps in a statement of scope, but couldn't find anything to this effect.


By the way, when I have raised issues about OWASP in the past, I have been challenged to fix them myself. But I'm not a normal member of OWASP, I'm an independent industry analyst who has been asked by a few OWASP members to provide coverage of OWASP. I am happy to enter into further discussions with OWASP members, but if you want me to build stuff then I am going to have to find a way of funding my time.

Should we take OWASP seriously?

Another stimulating discussion with @mcgoverntheory (James McGovern) about the ongoing OWASP project to identify the Top Ten Security Risks. I see no reason to change my previous opinion, which is that such lists are fundamentally misconceived.

As I've explained before (in this blog and elsewhere), I think the objectives of the list are muddled; I regard the methodology for producing the list as insufficiently rigorous; and I think it highly likely that the list will be widely used not as a precursor to a serious threat analysis but as a lazy substitute for it; so I just can't see that a Top Ten list is a good idea for anyone.

@mcgoverntheory replies "Many contributors to the top ten agreed that top ten lists as a concept are flawed. Its all about helping others move needle." Yes, but does it actually achieve any positive outcome? Show me.

@mcgoverntheory adds "Flawed concepts are propagated all the time. It's called marketing". But is it really the role of OWASP to be a marketing organization?

@mcgoverntheory continues "Everyone knows that Top X lists aren't meant to be complete nor necessarily measurable. Its about simple understanding". Well maybe everyone knows, but what matters is whether and how they act upon that knowledge.

@mcgoverntheory admits that "Sadly, most enterprises start and stop with awareness". Maybe so, but why should OWASP pander to this tendency?

And if OWASP is focusing its efforts on publicizing material that many contributors agree to be flawed, why on earth should industry analysts take OWASP seriously? Does OWASP want to be taken seriously?

Maybe it doesn't. @mcgoverntheory asks "What lift would analysts provide to OWASP? No products to sell and therefore we won't show up in quadrants or hype docs."

Of course, that depends what kind of industry analysis we are talking about. Some so-called industry analysis firms seem to do little more than reprocess and amplify the efforts of the software industry marketing departments, putting favoured products and vendors into a Magic Sorting Hat. Or they write like a theatre critic who gets invited to the previews and always finds something positive to say about the latest production, which can then be quoted on the play's website.

But I hope OWASP isn't the kind of organization that only wants analysis on its own terms, and understands that the value of industry analysis comes from the different perspective an analyst should be able to offer. In which case, I am happy to talk.

Friday, January 09, 2009

OWASP Top Ten - Update

OWASP is the Open Web Application Security Project. It is perhaps best-known for publishing Lists of the Top Ten (or more recently Top Twenty-Five) Security Bugs (or Vulnerabilities or Threats or Risks).

Following my earlier post on the OWASP Top Ten, as well as an exchange of emails with someone in the OWASP community, I posted the following question to the OWASP discussion group on Linked-In.

Do Top-Ten Lists distract from a holistic approach to security?

If you ask people to pay attention to the top ten items in a list of threats or vulnerabilities, they will almost inevitably pay less attention to other things. (Intelligent people are aware of the limitations of lists, but even they are not immune to such effects.)

If a security vendor has a particular interest in one item - for example selling protection or detection for a particular threat - then there may be some commercial significance in whether that item makes the top ten or not. So a commercially minded security vendor will look for ways of influencing (aka distorting) the top ten list in his favour.

Meanwhile, intelligent attackers may calculate that a significant portion of security dollars will be consumed by the top ten, leaving other vulnerabilities under-funded.

The OWASP website does contain a page (Where To Go From Here) explaining that the top ten list is only the starting point of a proper security analysis, but this page is very poorly signposted and I suspect that many people never reach this page.

The official purpose of the OWASP list is to educate people about the consequences of security vulnerabilities. But I think there is a broader education purpose, and I fear that top ten lists distract from this purpose.

This prompted a couple of interesting responses, expressing different views on the real purpose of the OWASP Top Ten. Michael Vance said that the items in the top ten list are those most likely to occur or those that are most likely to have the greatest impact. Christian Frichot said that lists are good at removing the low hanging fruit: I interpret this as meaning the most obvious and easiest to fix, which is not necessarily the same as frequency or impact.

In any case, the methodology for creating the OWASP top ten list does not seem to be designed to produce a list with the characteristics required by either Michael or Christian. It is partly based on historical data (frequency but not impact or low-hangingness, as far as I can see), but with some adjustment to allow for some future projections of increased risk. For example, one issue (CSRF) was promoted to the list because the team believed it to be important, but with no evidence produced to support this belief. So is the OWASP Top Ten List really based on a systematic assessment of (generic) likelihood and impact?

In any case, it would be strange if the same list were equally relevant to all applications in all organizations. Do we expect a retail bank to have the same security risks as a nuclear power plant? Do we expect an airline to have the same security risks as an online bookstore?

Clearly it would be stupid to rely completely on the Top Ten List - although I suspect that some people do just that. But my question is more fundamental - what are the grounds for thinking that a top ten list improves the overall process, rather than just adding a redundant step into the process? Christian's argument is interesting - by dealing quickly with the easy and obvious generic vulnerabilities, we can spend more time on the specific ones. But is that what people actually do?

Michael acknowledges that there is a significant disconnect between the way that Top Ten (and Top 20 and Top 25 and even Threat Classification) lists should be used and the way that they are used. He mentions a specific concern that this list will be misused by being improperly inserted into procurement language.

If OWASP were merely an academic organization, it could deny responsibility for how other people use their lists. "We produce the perfect lists, it's not our fault if people abuse them." But if OWASP is trying to make a real practical difference to security, then the actual effects and effectiveness of these lists is important.

Meanwhile, I am happy to see that other security experts agree with my concerns. Gary McGraw (CTO of Cigital) has just published an excellent article called Software [In]security: Top 11 Reasons Why Top 10 (or Top 25) Lists Don’t Work (via Bruce Schneier).


Update (March 2009)

Tom Brennan has just posed a question on the Linked-In discussion: "So what OWASP project are you going to start that will change this?" So the way to influence existing projects within OWASP is to start a rival project, is it? What a strange organization!


Related posts: OWASP Top Ten (October 2008), OWASP Top Ten 2010 (January 2010), Low-Hanging Fruit (August 2019)

Thursday, October 23, 2008

OWASP Top Ten

Back in August, James McGovern asked me to provide some OWASP coverage. Someone called Jennifer (Bayuk perhaps?) added a comment

OWASP is not dominated by commercial interests, and so the message is different than from product vendors (and service vendors too, to a lesser extent). When an automated tool vendor claims to "address" the OWASP Top Ten, they should be ashamed of themselves. And you should be ashamed if you're buying that hype and promoting automated tools as anything much more than an interesting distraction. Covering OWASP would allow people to get a far less biased opinion of what's going on in application security.


Okay, let me start from that point. The OWASP Top Ten Project periodically publishes a "Top Ten" list of the most common web application security vulnerabilities. The official purpose of this list is to educate people about the consequences of these vulnerabilities.

But of course the inevitable effect of publishing a Top Ten list is pretty obvious - it causes people to pay particular attention to the items in the top ten, and considerably less attention to the items that don't quite make the top ten. If I were a niche security vendor, I'd be lobbying extremely hard to make sure that the particular vulnerability addressed by my product got into the top ten. Conversely, if I were running a criminal scam, I'd know exactly which vulnerabilities to target.

This kind of thing clearly distracts people from a proper holistic view of application security. In my view it is the Top Ten List itself that is the "interesting distraction" Jennifer talks about, and I think OWASP should quietly drop this kind of cheap journalism and concentrate on educating people to do security properly. There is a lot of more intelligent stuff on the OWASP website explaining where to go from here, but I wonder how many people get that far?


Never let it be said that I am just a passive critic, however. Back in August, I registered on the OWASP wiki and posted a couple of helpful questions about the OWASP principles. I haven't had a response yet, but I live in hope.



See also

OWASP Top Ten Update (January 2009)
OWASP Top Ten 2010 (January 2010)

Tuesday, August 12, 2008

OWASP Coverage?

In a comment to an unrelated post, James McGovern asks

"What would it take for an industry analyst to provide comprehensive coverage via blog entries on the work that OWASP is doing?"

I can't speak for anyone else, but here's my answer. I might provide occasional comments about OWASP without any special motivation, but before I go to the trouble to provide comprehensive coverage about something, I need to see some strong interest from my readers. I also need to feel that this is a subject I can add some value to, rather than merely repeating what everyone else is saying.

So if anyone wants me to take a thorough look at OWASP (or anything else for that matter), please add a comment to this blog, indicating the nature of your interest and what specific questions you'd like me to address. Thanks.

Sunday, May 18, 2008

Guardian Angel

From a recent US patent application
An intelligent personalized agent monitors, regulates, and advises a user in decision-making processes for efficiency or safety concerns. The agent monitors an environment and present characteristics of a user and analyzes such information in view of stored preferences specific to one of multiple profiles of the user. Based on the analysis, the agent can suggest or automatically implement a solution to a given issue or problem. In addition, the agent can identify another potential issue that requires attention and suggests or implements action accordingly. Furthermore, the agent can communicate with other users or devices by providing and acquiring information to assist in future decisions. All aspects of environment observation, decision assistance, and external communication can be flexibly limited or allowed as desired by the user.
Twenty inventors are listed, including Gates, William H. (Medina, WA) and Ozzie, Raymond E. (Seattle, WA). The presence of these two names on the patent application is attracting some attention from the blogosphere.
  • a most unusual Microsoft patent application that should intrigue privacy advocates [TheoDP]
  • interesting and frightening at the same time [Dennis Kudin on security]
  • This sounds interesting at first glance, but also a little creepy. ... I'm not so sure I'd be terribly keen on having my device capable of some of those functions. [PDAPro.info]

There is some discussion in the comments to Bruce Schneier's blog about the extent of Bill's and Ray's contribution to this invention. Maybe it's true that Bill and Ray can attach their names to pretty much any Microsoft patent application if they choose. In which case the interesting question is what it was about this particular invention that attracted their interest. 

The name Guardian Angel is leading some commentators to view this as a security mechanism, but it is clearly intended to provide much more than security: a comprehensive mechanism for presence and context, which are key elements of some of the things both Bill and Ray have talked about in the past.

There is also some discussion on Bruce's blog about the originality of the invention and the possibility of prior art. You really can't tell this from the summary though; to assess this properly, you would need to look at the whole application including the diagrams, but I haven't managed to access the diagrams. Clearly there are other companies working on mechanisms for presence and context, including the telecoms companies. I had a briefing on this very topic with Avaya recently. See my notes on Presence 2.0.

 

See also: What does a patent say? (February 2023)

Tuesday, August 28, 2007

Skype Skuppered

It turns out that it was Microsoft that brought down Skype for two days earlier this month. Microsoft's monthly software update (known as Patch Tuesday) triggered millions of computers to reboot at the same time, which always puts an unusual strain on major Internet companies such as Skype.

As Alex from RiskManagement Insight points out, this is equivalent to a form of DDOS (distributed denial of service) attack. From a risk management perspective, it may not matter very much whether an attack is deliberate and malicious, or merely an accidental side-effect of some entirely innocent action.

Although Skype had survived previous Patch Tuesdays without incident, it seems that this month's Patch Tuesday triggered a previously unknown bug in Skype's software. As Alex points out, it is practically impossible to construct a test environment large and complex enough to simulate this scenario.

I haven't seen any figures, but I have little doubt that Skype's competitors (including Microsoft) must have experienced an unusually high level of new registrations during Skype's misfortune. Now that we have become accustomed to free voice calls over the Internet, it seemed outrageous to return to the almost mediaeval practice of paying real money for talking over the telephone, so my colleagues and I signed up to Yahoo Messenger.

It's an ill wind ...

Friday, May 11, 2007

IT Security Industry

Lots of people (e.g. Gunnar Peterson, Pete Lindstrom) are attacking Bruce Schneier for asking Do we really need a security industry?

Obviously Bruce doesn't expect the IT security industry to disappear any time soon. He points to some of the structural reasons for the economic viability of stand-alone products and services for IT security (including legal liability - or the lack of it), as well as the vested interests of software companies.

In some ways, the global security situation is getting worse with the increasing fragmentation of functionality and responsibility, and the increased interconnectedness of human and automated systems. This phenomenon isn't just an IT problem: it exists in other domains as well.

Bruce's argument is that security should be (increasingly) embedded into the infrastructure. This is the logic underlying the acquisition of Bruce's own company by BT last year. (See my comment: BT enters the Blogosphere.)

Pete is scornful, and there are some similar comments on Bruce's own blog:
"The notion of 'natural' security in the face of an intelligent adversary is so fundamentally ignorant that the whole thing must be a charade. It isn't even a pipe dream - it is an impossibility. Throw in the fact that IT resources are increasing in value and function and there is no doubt of that impossibility."
Gunnar's criticisms are more moderate. He also questions the notion of natural security, but acknowledges the problems with the present situation:
"The way the IT security industry is presently constituted, is not effective, focuses WAY too much on network security instead of app and data security, and is incredibly reactive and tactically focused."
For my part, I think it's always useful to ask provocative questions. Questions like "Do we really need X?" (or the equally provocative "Does Y matter?") shouldn't be dismissed with a simple Yes/No answer. Such questions call for an exploration of the true actual or potential value of X and Y, and perhaps a search for better (more innovative, more intelligent) alternatives to the current state-of-the-art.

Do we need an IT security industry? Probably yes, but not the one we've got at the moment.

Sunday, February 11, 2007

Problem-Solving

There are two contrasting patterns of problem-solving behaviour in the software industry.
  • Solving problems on a one-off basis
  • Solving an entire class of problems in a single move.
Many of the important innovations in software have resulted from successfully tackling major classes of problem rather than isolated instances. And there are many people in the software industry for whom this way of problem-solving has become an ingrained habit.

I therefore find it odd that some classes of recurring problem continue to be tackled on a one-off basis. For example, the industry still doesn't seem to have found a reliable way to eliminate buffer overflows - even though these are a regular cause of software bugs and security vulnerabilities.

Another common example of this pattern occurs in user support. When a user reports a problem, this probably indicates that a number of other users have a similar problem. And it is probably not good enough to fix the problem only for the users who report the problem. In fact it may be more important to fix the problem for those users who haven't noticed that there is a problem at all.

But if the response is to solve the problem as if it belonged to a single user, then this seems to deny the existence of a broader problem.

Take blog feeds for example. A couple of times recently I've noticed problems with blog feeds, and I've gone to the trouble to notify the blog author. What I'd expect the blog author to do is fix the feed. What happens instead is that the blog author sends me back a helpful email telling me how to redirect my newsreader. Actually I can work that out for myself thanks.

Perhaps some blog authors assume that their subscribers are all fluent in RSS. Because I'm the one identifying a problem, they might imagine I am positioning myself at the incompetent end of the spectrum. And the problem is my problem.

Actually, it's precisely because I'm not at the incompetent end of the spectrum that I can see there is a problem. And it's not my problem if the blog author loses some of his subscribers because his feed is broken. It's his problem.

Tuesday, May 03, 2005

Jericho

Fortress Security

Back in 2002, Aidan Ward and I wrote some reports for the CBDI Forum on Web Services Security, which among other things laid siege to the Fortress Model of security. We were ahead of our time. The Fortress walls are not crumbling yet, but we are now joined by some serious allies.

See also brief note on Autonomous Computing: Fiefdoms & Fortresses

Jericho Forum

Jericho Forum (part of the Open Group) is a non-profit security standards group, led by user organizations. This is leading the push towards more agile and interoperable security models. 

Press Release: Executives Agree that Interoperability, Deperimeterization of Data and Horizontal Integration Are Essential (April 2004)
News Story: New boundaries and new rules (John Sterlicchi, SC Magazine, Jan 2005)
News Story: Vendors line up to see Jericho vision (Ron Condon, SC Magazine, Feb 2005)
News Story: The Future of IT Security is Fewer Walls, Not More (Dan Ilett, ZDNet, April 2005)

dePerimeterization

This essentially means tearing down the Fortress model. Definitions: Whatis.com, Word of the Day

Security Vendors

nCipher Cryptographic IT Security See press release (April 2005), on joining the Jericho Forum.
Vordel XML Web Services Security See weblog postings (March 2004, July 2004) by CTO Mark O'Neill


CBDI Forum

Web Service Security (CBDI Journal, January 2002)

Component-Based Security for Web Services (CBDI Special Report, July 2002)

Agile Security for SOA (CBDI Journal, June 2005)

Thursday, October 21, 2004

Consolidation

Several commentators see the recent merger of Actional and Westbridge as a harbinger of industry consolidation among web service players.
  • David Sprott (CBDI Forum) discusses whether the industry is now "Crossing the Chasm".
  • Phil Wainewright (Loosely Coupled) thinks we are now entering the "Acceleration" phase. (He tells us we're still some way short of the "EndGame" phase, and then provides evidence that this phase has already started!)
From an economic perspective, this merger can be understood as a response to a given set of economic forces. In general terms it is easy to predict more mergers, and we may be able to identify likely targets by looking at such economic indicators as revenue, growth, funding, cashflow, cashburn and so on.

This merger can also be understood from a technological perspective, as a statement about the viability of stand-alone security products. The technological logic of the merger is to integrate security products into management platforms. The same logic can be seen in CA's acquisition of Netegrity. It is also implicit in IBM's Tivoli brand, which covers a collection of security and management products.

This reflects a growing recognition of the complexity and dynamic nature of security requirements. While many stand-alone security products do an excellent job at guarding against a specific class of threat, what is needed is an agile security architecture capable of rapidly mobilizing a range of effective responses to newly emerging threats.

Let's return to the economic perspective. Security attacks are designed to achieve the maximum penetration for the minimum effort. Many attackers are motivated not by technical ego but by results (criminal or political). If a given style of attack ceases to be effective, we can expect a new style of attack to appear. If the attackers are more agile than the defenders, then this gives them a natural advantage. Against an agile attacker, it is not wise to invest all your resources in fixed defences.

Thursday, September 16, 2004

Security Note

Microsoft has announced a critical vulnerability in Windows, which allows malicious code in JPEG files to be executed.
Source: BBC News

Like many security problems, this arises because of a failure of encapsulation. With a reasonable architecture, your photos could contain all sorts of secret messages and malicious code but these would not leak out. The software platform would only execute the code inside some sort of sandbox. But I don't want to have to go to this trouble. The problem only arises because someone had the clever idea that JPEG files could contain code, and programs reading JPEG files would execute the code. (JPEG is an industry standard: we can't blame all this on Microsoft.) That clever idea only works safely if we assume a much more sophisticated software architecture and a much higher level of software quality than we are likely to see in the foreseeable future. Otherwise, such clever ideas are dangerous.

Lesson One: Clever ideas often increase complexity, and have a negative impact on security. If even an innocent JPEG file can be crawling with malware, what are the implications for advanced middleware, such as web services? SOAP messages can carry all sorts of payloads, including compressed, fragmented and encrypted ones. An XML document can contain data or code, and the code can be in any language you choose. We know that passenger frisking and baggage screening doesn't always detect weapons, so how do we expect a firewall to detect dangerous data packages? The firewall (and the fortress model which depends on it) are made irrelevant by these advanced technologies.  

Lesson Two: If we are using open distributed technologies, we must expect security to be managed in an open and distributed way, not by building a false illusion of central control.

more

Wednesday, July 23, 2003

Bleak Future of the Internet?

Many of us have become dependent on the Internet for personal and business communication. So it is a matter of some concern to see how the Internet - especially email - is filling up with rubbish.

Innocent newsletters are getting caught in email filters, and newsletter senders are finding this increasingly frustrating.  David Sprott of CBDI devoted his July 10th 2003 newsletter to this topic, and Bruce Schneier (CryptoGram) picked up the topic again in his newsletter of July 15th 2003.

Filters may be locally effective - and this encourages some complacency. But the filters are generally ineffective, and generate significant levels of false positives.  Furthermore, the existence of filters simply encourages the producers of rubbish to increase their production volumes, at little cost to themselves, in order to maintain the desired levels of dissemination. They are therefore counterproductive for the Internet as a whole.

While many individuals and small businesses have become dependent on the internet, there are growing numbers of old-economy firms where the nuisance and risks of connection to the internet may be perceived to outweigh the advantages. It may be hard to continue to justify open access, and many firms may be tempted to disconnect themselves from the internet altogether.

Even in the largest firms, there will always be individuals and groups who want to remain connected to the internet for various reasons - including marketing and R&D groups. But the corporate interest may prevail - and it may be a constant effort to keep the lines of communication open.

This scenario should be extremely worrying for decent small firms - as well as large media empires  - whose business depends on proper use of the internet. We are currently talking to a number of media and technology firms, to prepare contingency plans against this scenario.

Wednesday, December 18, 2002

Autonomous Computing - Fiefdoms and Fortresses

Pat Helland of Microsoft has proposed the Autonomous Computing model as an application design pattern for cooperation between independent systems that do not trust each other. It has two key notions.

Fiefdom An independent computing environment that refuses to trust any outsiders and maintains tight control over a set of mission critical data 

Emissary A computing component that helps prepare requests to submit to a fiefdom. It operates exclusively on published (snapshot) reference data and single-user data. 

Helland uses the autonomous computing model to explain many of the new types of applications including offline apps, scalable web-farms, B2B apps, content syndication and content aggregation. (How secure are these then?) more

Roger Sessions of Object Watch has combined the Helland model with other elements to produce an elaborate Fortress Model of computer security. A fortress is a self-contained software system that contains business logic (grunts) and private data (strongboxes), and is surrounded by an unbreachable wall. Communication with the outside world passes through a drawbridge, and is controlled by guards and by treaties with allies.

I have many reservations about these models. Here are three to be going on with. 

  • Reliance on an absolute, binary notion of trust. Anything or anybody inside the wall is trusted absolutely, anything or anybody outside the wall is mistrusted. 
  • Reliance on simple topology. A wall creates a simple enclosed space, a straightforward boundary between inside and outside. 
  • Reliance on technology. The fortress model depends on firewalls and other security mechanisms. 

 


Pat Helland, Autonomous Computing paper and blogpost (updated December 2020)

Roger Sessions, The Software Fortress Model: A Next Generation Model for Describing Enterprise Software Architectures (Object Watch Newsletter 36, 17 November 2001)

Richard Veryard and Aidan Ward, Web Service Security (CBDI Journal January 2002)


Originally published at http://www.veryard.com/sebpc/security.htm#autonomous

Links updated March 2022 to include Pat Helland's new version