SANS Security Trend Line

Ramblings On Risk - Part II

In Part I, I explained why I have always trashed the traditional risk equation of the form Risk = Probability of event * Cost/impact of event. I've pushed an alternative, simplified form of Risk = (Threat * Vulnerability) + Action. Here's where that comes from:

I've always been a fan of the Common Vulnerability Scoring System. It is widely used by vendors who release patches (other than Microsoft?) and by the various services that release information on vulnerabilities. It provides a simple model for external services to score vulnerabilities with a base score of Exploitability and Impact, and an initial Temporal score that describes the availability of active exploits, fixes/workarounds and the difficulty of verifying whether or not you actually have the vulnerability.

CVSS also provides an Environmental Score metric, which allows you to use a standard methodology for tailoring the impact estimate to your particular organizational realities. Over time you can also easily adjust both the Environmental and Temporal metrics as conditions change. There are many free tools that allow you to calculate CVSS scores.
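To make that scoring concrete, here is a minimal Python sketch of the CVSS v2 base-score arithmetic. The metric weights are the published v2 lookup values; the example vectors fed in at the bottom are just illustrations.

```python
# CVSS v2 base score, per the published v2 equations.
# Metric weights are the standard v2 lookup values.

AV = {"L": 0.395, "A": 0.646, "N": 1.0}     # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}      # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}     # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.66}     # Conf/Integ/Avail impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0 if impact == 0 else 1.176          # all-N impacts zero the score
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# "AV:N/AC:L/Au:N/C:C/I:C/A:C" -- remote, easy, total compromise
print(cvss2_base("N", "L", "N", "C", "C", "C"))  # 10.0
# No direct C/I/A impact at all forces the base score to 0.0
print(cvss2_base("N", "L", "N", "N", "N", "N"))  # 0.0
```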

[Image: CVSS tool example]

By monitoring vulnerability and threat information feeds and using CVSS as a simple scoring mechanism, in one fell swoop you can calculate the (Threat x Vulnerability) term in a repeatable, justifiable manner and use that to rank risks. This works even for vulnerabilities that don't tie directly to a vendor patch or software vulnerability report that ships with a CVSS score. For example, I've seen CVSS used to estimate an organization's risk level for phishing attacks, with the General Modifiers under the Environmental metric adjusted depending on the results of internal self-phishing tests.
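The re-ranking as conditions change is what the Temporal metric captures. Here is a rough sketch of the v2 temporal discount (exploit availability E, remediation level RL, report confidence RC; weights again from the v2 spec), showing how a score, and therefore a risk's position in the stack, moves over time:

```python
# CVSS v2 temporal score: discounts the base score by exploit
# availability (E), remediation level (RL) and report confidence (RC).
# Weights are the published v2 values.

E  = {"U": 0.85, "POC": 0.9, "F": 0.95, "H": 1.0, "ND": 1.0}
RL = {"OF": 0.87, "TF": 0.90, "W": 0.95, "U": 1.0, "ND": 1.0}
RC = {"UC": 0.90, "UR": 0.95, "C": 1.0, "ND": 1.0}

def temporal(base, e, rl, rc):
    return round(base * E[e] * RL[rl] * RC[rc], 1)

# A 7.5 base drops once a functional exploit exists but an
# official fix has shipped...
print(temporal(7.5, "F", "OF", "C"))   # 6.2
# ...and climbs back to the full base score when a reliable
# exploit circulates with no fix available.
print(temporal(7.5, "H", "U", "C"))    # 7.5
```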

What this approach lacks is any monetary estimate of impact. In Part I, I said, "To me, the value of any risk estimation exercise is to produce a meaningful way to prioritize action: which risks should I address first, working my way down the list until I run out of resources?" I've literally (I mean literally literally) never seen a meaningful a priori dollar estimate of the impact of a successful attack outside of areas (such as manufacturing) where the cost of downtime has been well established. Elsewhere, even estimates of downtime costs have been nearly useless: years ago, then-eBay CEO Meg Whitman felt a massive DDoS attack cost eBay a few thousand dollars, while external estimates ran to hundreds of millions.

Meaningful also means regular, repeatable, justifiable and affordable, and those dollar estimates are rarely any of the four. Now, there are times when some dollar estimate is absolutely required, but I think in reality those times are few and far between.

Finally, I added that Action term. Adding a constant is a tradition in the creation of static equations to try to model very dynamic universes — see Einstein's cosmological constant. The Action term is there to allow you to rapidly move a risk up or down the ranking stack by adding/subtracting pre-defined values.


  • When some regulatory body puts out an alert on a vulnerability or threat you scored low, or there is some press hype, and the CEO/Chief Legal counsel starts asking about it. ++
  • One of your peers gets attacked (thanks, Target!) and you want to jump on that hype to justify some business disruption to clean up a bunch of medium level risks. ++
  • You know that some near term future action, like a data center upgrade or coming merger/acquisition, is going to take care of this risk and you want to move it down the stack. --
  • You know that some near term future action, like a data center upgrade or coming merger/acquisition, is going to make this risk a bajillion times worse, and you want to move it up the stack. ++
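Putting it all together, here is a toy sketch of the full Risk = (Threat x Vulnerability) + Action ranking. The risk names, scores, and the pre-defined bump value are all made up for illustration; the action column is the ++/-- adjustment described in the bullets above.

```python
# Toy ranking for Risk = (Threat x Vulnerability) + Action.
# Scores and names are invented for illustration only.

BUMP = 2.0  # assumed pre-defined adjustment value

risks = [
    # (name, threat*vulnerability score, action adjustment)
    ("Unpatched internet-facing CMS", 8.1, 0.0),
    ("Phishing susceptibility",       6.3, +BUMP),  # CEO is asking (++)
    ("Legacy app SQL injection",      7.0, -BUMP),  # retired in the DC upgrade (--)
]

# Rank by (Threat x Vulnerability) + Action, highest first.
ranked = sorted(risks, key=lambda r: r[1] + r[2], reverse=True)
for name, tv, action in ranked:
    print(f"{tv + action:4.1f}  {name}")
```

Note how the Action term lets hype or a planned fix reorder the stack without touching the underlying CVSS-derived scores.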

There's no such thing as the perfect risk equation but, just as in most areas of security, the elegant is the enemy of the useful.


Posted March 17, 2014 at 3:03 PM

Marc Ruef

Nice two-parter! I'm one of the mods of scip VulDB, a vulnerability database:
I've pushed CVSSv2 for our database and during security testing because I think it provides a good level of measurement (especially the Temporal scores are nice). Even though I'm not always happy with it (see
But we ran into problems the moment we wanted to use CVSSv2 for isolated weaknesses in source code, which require some luck to exploit or which require manipulation by one of the developers. The same goes for classifying something that might prepare an exploitation but doesn't exploit anything directly (in this case the C/I/A impact metrics are all N, which leads to a Base score of 0.0).
This made me realize that CVSSv2 isn't always capable of handling these kinds of "undefined" situations. How do you approach them?

Posted March 18, 2014 at 6:17 PM

John Pescatore

I don't think there is any meaningful way to rank every possible form of threat or vulnerability in one methodology. You've pointed out one area that CVSS doesn't necessarily cover well, leading to a base score of 0. Insider threats are another area that sort of goes in the other direction, always scoring 10.
By meaningful, I mean useful. And something that really only covers defined situations can actually be really, really useful. This is sort of the basic mantra behind the Critical Security Controls: this may not solve 100% but if you focus 20% of your efforts here you reduce your problems by 80%.
One way to deal with conditions like the ones you describe is to assume the luck or developer manipulation has already happened: the prep phase is over, and the vulnerability now exists. This is sort of like cigarette smoking or obesity: it doesn't kill you immediately, and we can't really say when you will die, but we can say your lifetime is reduced by a predictable value because you are "preparing" the vulnerability!
