In Part I, I explained why I have always trashed the traditional risk equation of the form Risk = Probability of event * Cost/impact of event. I've pushed an alternative, simplified form of Risk = (Threat * Vulnerability) + Action. Here's where that comes from:
I've always been a fan of the Common Vulnerability Scoring System. It is widely used by vendors who release patches (other than Microsoft?) and by the various services that publish vulnerability information. It provides a simple model for external services to score vulnerabilities: a Base score built from Exploitability and Impact metrics, plus an initial Temporal score that captures the availability of active exploits, the availability of fixes/workarounds, and the difficulty of verifying whether or not you actually have the vulnerability.
CVSS also provides an Environmental score metric, which gives you a standard methodology for tailoring the impact estimate to your particular organizational realities. Over time you can easily adjust both the Environmental and Temporal metrics as conditions change. There are many free tools that allow you to calculate CVSS scores.

*CVSS Tool Example*
By monitoring vulnerability and threat information feeds and using CVSS as a simple scoring mechanism, in one fell swoop you can calculate the (Threat * Vulnerability) term in a repeatable, justifiable manner and use it to rank risks. This works even for vulnerabilities that don't tie directly to a vendor patch or a software vulnerability report that ships with a CVSS score. For example, I've seen CVSS used to estimate an organization's risk level for phishing attacks, with the General Modifiers under the Environmental metric adjusted depending on the results of internal self-phishing tests.
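To make the idea concrete, here is a minimal sketch of that scoring-and-ranking step. The subscores, the averaging, and the temporal multiplier below are illustrative assumptions, not the official CVSS formulas, and the three risk entries are invented examples:

```python
# Hypothetical sketch: ranking risks by a CVSS-style (Threat * Vulnerability) term.
# The weights and formulas here are simplified stand-ins for the real CVSS equations.

def threat_x_vuln(exploitability, impact, exploit_maturity):
    """Combine base-style subscores (each 0-10) with a temporal-style
    multiplier (0.0-1.0) reflecting exploit availability."""
    base = (exploitability + impact) / 2        # simplified base score, 0-10
    return base * exploit_maturity              # temporal adjustment

# Invented example entries, scored from feed data plus internal knowledge.
risks = {
    "unpatched web server (public exploit)": threat_x_vuln(9.0, 8.0, 1.0),
    "phishing (per self-phishing results)":  threat_x_vuln(7.0, 6.0, 0.9),
    "internal app, no known exploit":        threat_x_vuln(5.0, 7.0, 0.5),
}

# Rank highest first; work down the list until you run out of resources.
for name, score in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{score:4.1f}  {name}")
```

The point is not the particular arithmetic but that the same inputs always produce the same ranking, which is what makes the exercise repeatable and justifiable.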
What this approach lacks is any monetary estimate of impact. In Part 1 I said "To me, the value of any risk estimation exercise is to produce a meaningful way to prioritize action - what risks should I address first, and I keep working my way down the list until I run out of resources." I've literally (I mean literally literally) never seen a meaningful a priori dollar estimate of the impact of a successful attack outside of areas (such as manufacturing) where the cost of downtime has been well established. Outside of manufacturing, even estimates of downtime have been nearly useless: years ago then-eBay CEO Meg Whitman put the cost of a massive DDoS attack against eBay at a few thousand dollars, while external estimates ran to hundreds of millions.
Meaningful also means regular, repeatable, justifiable and affordable; dollar estimates are rarely any of those four. There are times when some dollar estimate is absolutely required, but I think in reality those times are few and far between.
Finally, I added that Action term. Adding a constant is a long tradition when static equations try to model a very dynamic universe (see Einstein's cosmological constant). The Action term lets you rapidly move a risk up or down the ranking stack by adding or subtracting pre-defined values. Some examples:
- When some regulatory body puts out an alert on a vulnerability or threat you scored low, or there is some press hype, and the CEO/Chief Legal counsel starts asking about it. ++
- One of your peers gets attacked (thanks, Target!) and you want to jump on that hype to justify some business disruption to clean up a bunch of medium level risks. ++
- You know that some near term future action, like a data center upgrade or coming merger/acquisition, is going to take care of this risk and you want to move it down the stack. --
- You know that some near term future action, like a data center upgrade or coming merger/acquisition, is going to make this risk a bajillion times worse, and you want to move it up the stack. ++
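The scenarios above can be sketched as a table of pre-defined adjustments applied on top of the (Threat * Vulnerability) score. The adjustment values and the sample risks below are illustrative assumptions, chosen only to show how an Action bump reorders the stack:

```python
# Hypothetical sketch: Risk = (Threat * Vulnerability) + Action.
# The Action values are pre-defined, agreed-on bumps, not derived from any standard.
ACTION = {
    "regulator_alert": +2.0,   # regulator/press attention, CEO asking questions
    "peer_breached":   +2.0,   # ride the hype to justify cleanup (thanks, Target!)
    "fix_coming":      -3.0,   # upcoming upgrade or M&A will take care of it
    "about_to_worsen": +3.0,   # upcoming change makes this risk far worse
}

def risk(threat, vulnerability, actions=()):
    """threat and vulnerability are CVSS-derived scores; actions is a
    list of pre-defined adjustment keys."""
    return threat * vulnerability + sum(ACTION[a] for a in actions)

# Invented examples: the Action term moves items up or down the stack.
ranked = sorted([
    ("legacy VPN",      risk(0.8, 7.0, ["regulator_alert"])),
    ("POS malware",     risk(0.5, 8.0, ["peer_breached"])),
    ("old file server", risk(0.6, 6.0, ["fix_coming"])),
], key=lambda kv: -kv[1])

for name, score in ranked:
    print(f"{score:5.1f}  {name}")
```

Because the Action values are pre-defined rather than invented per incident, the re-ranking stays as repeatable and justifiable as the underlying CVSS scoring.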
There's no such thing as the perfect risk equation but, just as in most areas of security, the elegant is the enemy of the useful.