Opinion: vulnerable-by-default isn’t working out

Archive Replay (original posting)


Summary: Collected views, thoughts and opinions on the impact of the vulnerability-centric model in infosec.


This was a pretty decent rant making the rounds at one time. The author of the post “The old speak: Wassenaar, Google, and why Spender is right”, commenting on the responsible disclosure debate, tells us:


“both sides of the disclosure fence suffer from one fatal flaw. A flaw that Brad Spengler AKA Spender has been incessantly pointing out for years and it’s that bugs don’t matter. Bugs are irrelevant. Yet our industry is fatally focused on what is essentially vulnerability masturbation.”

“At the end of the day my team, Google’s team, and lots of people’s teams are rooted in a culture of vulnerability masturbation. it’s all bullshit. If you care about security that is.”
“But if you care about systemic security….then you will stop circle jerking over individual vulnerabilities and listen to what Spender has been saying for years.”

“Which is: you don’t chase and fix vulnerabilities … you design a system around fundamentally stopping routes of impact.”


That’s pretty direct. You know what they say about that kind of behavior, right? Oops! Too late: infosec has been blind to this reality for a while. One sees more of these “what we’re doing isn’t working” rants and calls for change lately.

Debates about responsible disclosure and its timing, as well as the potential “chilling effect” of Wassenaar and the CFAA on security researchers and bug hunters, actually side-step and dance around the real issue.

And that is…

… that the vulnerability-centric model is simultaneously the central pillar and Achilles heel of infosec!


Achilles was a legendary warrior figure who was supposedly invulnerable to injury, until he suffered a fatal wound to his one weak point, his heel. An Achilles heel is a weakness in spite of overall strength, one that can actually or potentially lead to downfall. While the mythological origin refers to a physical vulnerability, idiomatic references to other attributes or qualities that can lead to downfall are common.

Software vulnerabilities are a subset of the total vulnerabilities that add risk, but a major one. There’s ongoing huffing and puffing about vuln disclosure and vendors and bug researchers getting their toes stepped on, but if you step back and look at the big picture, you might wonder whether what they are fighting over is really even working.

I’ve poked at vuln-by-default in a few posts before. Some of it bears repeating.

Tal Weiss discusses some bug issues here:

“… interactive debugging tools don’t provide much help once a program has moved from the development and testing stage to “real world” servers…it’s not always possible to find bugs in advance. Many problems in modern applications are caused by stuff that its developers didn’t actually build …. No software is an island … You’re depending on code that’s being maintained by other people, such as third party software and APIs. Someone else, a partner or someone in another department of your company, may change something and it will break your system … And of course, when you’re moving fast and breaking things, you can always expect to find many bugs that slipped by during testing.”

As I wrote about here, Michael Locasto describes the impact of dependence on vuln-by-default in “Reflections on Re-Balancing the Attacker’s Asymmetric Advantage”:

“The core of the asymmetric model of cybersecurity is the focus on the distribution of vulnerabilities. Accepting this truth leads to the recognition of a fundamental imbalance in effort between attacker and defender.”


“the community is playing a game we are destined to lose: attackers will always have quantitatively less work per unit time and more operational leeway than defenders.”


“Like all hard truths, it is difficult to accept that we are engaged in an ultimately useless endeavor, and so we persevere in incremental improvements that do nothing to change the underlying structure.“


Having to deal with all those vulnerabilities translates into requirements for skilled personnel, as Martin Libicki of RAND points out here:

“We have a model that basically says ‘I accept the world of software as is and I am going to patch everything at a systemic level,’” he said. “It is an approach that is basically unsustainable in the long term. A company that has 600 security professionals today might require 1,000 in a few years, and still not be secure.”

Of course, if everybody from the janitor on up needs to get their black belt in IT security, more layers of security will be added, which will mean more complexity and more attack-surface vulns. It’s a vicious circle.

Dan Geer adds more about the tie between dealing with vulns and skill requirements here:

“I submit that polarization has come to cybersecurity. The best skills are now astonishingly good while the great mass of those dependent on cybersecurity are ever less able to even estimate what they don’t know, much less act on it. Polarization is driven by the fundamental strategic asymmetry of cybersecurity: the work factor for the offender is the incremental price of finding a new method of attack, but the work factor for the defender is the cumulative cost of forever defending against all attack methods yet discovered.”

“I don’t see the cybersecurity field solving the problem because the problem is getting bigger faster than we are getting better.”

Note what Gene Spafford has to say about patching in Patching is Not Security:

I have long argued that the ability to patch something is not a security “feature” — whatever caused the need to patch is a failure. The only proper path to better security is to build the item so it doesn’t need patching — so the failure doesn’t occur, or has some built-in alternative protection.

Yes, all vulnerable-by-default roads eventually lead to patching, which is a problem in itself. A fully patched system is not necessarily a secure system; at best it’s a less vulnerable one. How much infosec effort focuses on vulnerability management and patching? It’s a time and resource sink, and overall, things still look like they’re getting worse! I’m not sure what percentage of resource spend vulnerability management consumes. It’s a big chunk, but does it work? No one really looks at alternative protections either, because that would be hard!

Dave Lewis expresses more of Spaf’s thinking here:

“We have a predisposition to patch things as opposed to take the time to invest in addressing long term issues. Why do we continue to develop systems that lack proper security controls? Spafford puts it simply, “There are no consequences for sloppy design.” He’s not wrong as it applies to design. The consequences fall to those who have to support the systems as opposed to the designers. This is a process flow that is designed to fail from the outset.”

Additional considerations

I don’t have the tweet, but I remember (because I still chuckle over it) when Kurt Wismer said something like,

“Congratulations if you’re a security researcher who has found a vulnerability. There are now infinity minus one.”

Dan Geer, looking at metrics for bugs, says here:

“Bruce Schneier asked a cogent, first-principles question: ‘Are vulnerabilities in software dense or sparse?’ If they are sparse, then every vulnerability you find and fix meaningfully lowers the number of vulnerabilities that are extant. If they are dense, then finding and fixing one more is essentially irrelevant to security and a waste of the resources spent finding it. Six-take-away-one is a 17% improvement. Six-thousand-take-away-one has no detectable value. Eric Rescorla asked a similar question in 2004: ‘Is finding security holes a good idea?’ Rescorla established that it is a non-trivial question, as perhaps confirmed by our still not having The Answer.”

If his point is that we don’t really have the answer, why is infosec operating on the assumption that we do? Or is it just the only game in town? We make one app safer, but there are 50 others that are buggy, any one of which can serve as a pivot point.
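Geer’s sparse-versus-dense distinction is really just arithmetic about the marginal value of one fix. Here is a back-of-the-envelope sketch of it; the populations of 6 and 6,000 echo his example, and everything else is illustrative, not from any measured dataset:

```python
# Back-of-the-envelope look at the Schneier/Geer question: how much does
# fixing one vulnerability help under a sparse vs. a dense assumption?

def marginal_improvement(total_vulns: int, fixed: int = 1) -> float:
    """Fraction of the extant vulnerability population removed by `fixed` patches."""
    return fixed / total_vulns

# Sparse world: six vulns total, removing one is a meaningful dent (~17%).
sparse = marginal_improvement(6)

# Dense world: six thousand vulns, removing one is statistically invisible.
dense = marginal_improvement(6000)

print(f"sparse population: {sparse:.1%} improvement per fix")
print(f"dense population:  {dense:.3%} improvement per fix")
```

The point of the toy math is that the entire find-and-fix enterprise is only rational under the sparse assumption, and nobody has established that the assumption holds.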

Pete Lindstrom chimes in with an interesting point from the economic point of view in his post, “How ‘Stronger’ Software Can Lead to Higher Risk”:

“Engineers are focused on quality, so when they hear about vulnerabilities in software, their immediate reaction is to want to fix them… all of them.”

“Economists, on the other hand (get it?), look at cause and effect, actions and reactions, and, most importantly, outcomes. The root of the economic problem lies in the ultimate unwanted outcome – the breach. Economics-oriented security pros understand that everything we do is intended to thwart the breach. … The economist knows that fewer vulnerabilities is not the ultimate objective. The ultimate objective is to reduce the likelihood of an incident.”


“That is the key observation for this discussion – a breach requires both an attacker (threat) and a target (vuln), which manifests itself in the form of a connection between source and destination.”


“Even though the population of targets may be reduced (perhaps even significantly so), if the threat is sufficiently motivated, more connections can be made with the vulnerable targets. The only way to guarantee reduced risk is to bring one of the populations (most likely the vulnerable targets) to zero. History shows us this is not likely with commercial software in enterprises.”


“And there you have it – given the need for both threats and vulnerabilities, the reduction in one doesn’t force a reduction overall. And if the other element is increased in the process, the marginal difference in each population must be evaluated to truly understand the impact.”
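Lindstrom’s marginal argument can be made concrete with a toy expected-breach model: a breach needs a connection between a motivated attacker and a vulnerable target, so cutting the vulnerable population doesn’t cut breaches if attackers compensate with more connection attempts. All the numbers here are hypothetical, chosen only to show the offsetting effect:

```python
# Toy expected-breach model after Lindstrom's threat x vulnerability framing.
# Expected breaches scale with attack attempts times the chance a probed
# target is vulnerable. Numbers are made up for illustration.

def expected_breaches(attempts: int, vulnerable: int, total_targets: int) -> float:
    """Expected breaches if each attempt hits a uniformly random target."""
    return attempts * (vulnerable / total_targets)

before = expected_breaches(attempts=1000, vulnerable=200, total_targets=1000)

# Defenders halve the vulnerable population...
# ...but a motivated attacker doubles their connection attempts.
after = expected_breaches(attempts=2000, vulnerable=100, total_targets=1000)

print(before, after)  # same expected breaches: fewer vulns, unchanged risk
```

Which is exactly his conclusion: reducing one population proves nothing about overall risk unless you also account for what the other population does in response.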

Okay, who can bring the vulnerabilities variable to zero, especially considering the earlier point about vulnerability density? Few talk about the connection between threat and vulnerability, but the view of Trustifier is that since no one seems able to produce consistently secure code, controls should try to disrupt the vuln-threat connection. That is, make it so threats are unable to exploit existing vulnerabilities, even without patching. This has been the stated goal of Trustifier KSE (Kernel Security Enforcer).

In his recent article, “Why Do We Care About Zero Days?”, Michael Roytman connects more dots:

“Armed with the magical powers of this data, a good engineer could tell you exactly which of your vulnerabilities to fix, what the timelines should be based on what’s being attacked, and what’s okay to ignore. You’d fire off tickets to your dev teams, and, with some luck, in about ten years you’d be done.”

“But zero day vulnerabilities point out the assumption in our entire security process. We assume we know which vulnerabilities are out there, and when we don’t, we wait for a CVE to come out, a scanner to pick them up, and/or our engineers to find out information about them.”

“Because they’re a symbol of the inherent assumptions and failures that our vulnerability management systems make. They are Rumsfeldian unknown unknowns. In terms of business risk and business process, they are the impossible.”

When 79% of companies release apps with known vulnerabilities, can we really put much faith in the model? As was asked in a recent Black Hat keynote: “Should Software Companies Be Legally Liable For Security Breaches?” Which next huge breach will break the liability camel’s back? What about security vendors? Are they doing a good job? Gary McGraw tells us here that security offerings are only adding to the attack surface, because vendors are doing a poor job at software security. If security vendors can’t manage software quality, why do we expect others to? It’s not even their core competency.

The reason I zeroed in on the rant/post at the top of this post is that I had just finished reading Kim Zetter’s book “Countdown to Zero Day” and was, at the time, already struck by the frankness of the following related passages (~p. 220):

“One code warrior employed by a government contractor … described a bit of the work he did as part of a team of five thousand….”

“The company gave him a list of software programs they wanted him to hack, and he quickly found basic security holes in all of them. His group, he said, had a huge repository of zero-day vulnerabilities at their disposal: ‘tens of thousands of ready-to-use bugs’ in software applications and operating systems for any given attack. ‘Literally, if you can name the software or the controller, we have ways to exploit it,’ he said. Patched holes don’t worry them, because for every vulnerability a vendor fixed, they had others to replace it.”

She quotes Andy Pennington of K2Share, who has a background in US government security:

“But it’s a government model that relies on keeping everyone vulnerable so that a targeted few can be attacked….We’re putting millions of dollars into identifying vulnerabilities so that we can use them and keep our tactical advantage.”

She adds, “Pennington acknowledged there were “competing interests” in government when it came to the vulnerability issue.”

No kidding. Is anybody taking into account the damage and costs of keeping everyone vulnerable? I’m guessing businesses and the public have to bear those costs. So what if a few citizens have to suffer some hits to their livelihood? It’s for their own good. We can still hack our enemies.

Sure, aiming for better quality software is always a good idea. Keep in mind that without trusted execution environments, it has to be nearly perfect. Is it possible to get close? Are industry expectations realistic? Is there any point in even talking about value? I’ve met a number of software folks, and they have been really smart, dedicated people, but I wonder whether the vuln-centric approach, and the risk model it anchors, will be able to deliver the goods. We’re probably better off with it than without it, but one has to question its value as we follow the continuing breach parade.

Is the whole thing one big money pit?



You may believe that finding a vulnerability or bug is getting the ball closer to the hole, but how many adversaries/parties (governments, management, and contractors) are moving the hole further from the ball?



Custom art by Scott Lewis



