I recently gave a webinar talk on Security Analytics that included a simplified risk equation I've been showing for years:
Risk = (Threat * Vulnerability) + Action
I'll explain that more in a bit. After the webinar I got some Twitter feedback that it was better to stick with the more historical risk equation:
Risk = Probability of event * Cost/impact of event
For over a decade at Gartner (and at TIS before that) I've railed against that equation - when applied to information security, I've rarely if ever seen it provide value. In fact, that approach was gospel for federal government systems until it was abandoned when OMB A-130 guidance was updated in the mid-1990s.
To me, the value of any risk estimation exercise is to produce a meaningful way to prioritize action - which risks should I address first, working my way down the list until I run out of resources. That doesn't match what textbooks describe as Real™ Risk Management, mainly because they try to apply techniques that work in financial risk management - where the probability of loan defaults, currency fluctuations, etc. and the cost impact of those events are much more measurable and have history behind them. Most importantly: the financial events modeled have well-established time profiles, where probability and impact generally decline over time.
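The prioritize-then-spend-down loop described above can be sketched in a few lines. All of the actions, risk scores, and costs below are hypothetical, invented purely for illustration:

```python
# A minimal sketch of risk-driven prioritization: rank actions by risk
# score, then take them in order until the resource budget runs out.
# Every number here is made up for illustration.
risks = [
    ("patch internet-facing VPN", 9.1, 3),  # (action, risk score, cost in staff-weeks)
    ("rotate leaked API keys",    8.4, 1),
    ("segment flat network",      6.7, 8),
    ("harden build pipeline",     5.2, 4),
]

budget = 8  # staff-weeks available
plan = []
for action, score, cost in sorted(risks, key=lambda r: r[1], reverse=True):
    if cost <= budget:
        plan.append(action)
        budget -= cost
print(plan)  # the highest-scored actions that fit the budget
```

The point is that the ranking only needs to be good enough to order the list sensibly - the scores themselves never need to be "true" dollar figures for the exercise to drive action.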
Applying that risk equation outside of straightforward financial cases works somewhat in manufacturing systems and other areas where the events are outages and the cost of downtime is well known. But almost universally that equation fails for IT systems and applications, and becomes a time-consuming exercise that amounts to producing a very small imaginary number (probability of an event), multiplying it by a very large imaginary number (cost/impact of that event), and coming up with a medium-sized imaginary number for the predicted risk impact.
Now, there is nothing wrong with that if those medium-sized imaginary numbers lend themselves to comparison with each other - so that the higher values really represent the areas where action should be taken most quickly - and if the resources expended coming up with the risk estimation don't consume too high a percentage of the resources available to actually take action.
In most cases the P(event) * $(impact) approach fails on both of those factors. That is mostly because the value of information assets and the real cost of information security events are not as well defined (or in most cases defined at all) as they are in the financial world. Those issues cause the estimated risks to vary widely and largely randomly, and make the effort required to produce any meaningful estimate of those two parameters very high - so most estimates are not very meaningful.
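That instability is easy to demonstrate. In the sketch below, each system gets a wide (hypothetical, invented for illustration) range for both probability and impact; drawing equally defensible point estimates from those ranges and ranking by P(event) * $(impact) produces different priority orders on different draws:

```python
import random

# Hypothetical systems with wide estimate ranges for annual event
# probability and dollar impact. All numbers are invented - which is
# exactly the situation for most IT assets.
systems = {
    "web portal":   {"p": (0.01, 0.20),  "impact": (50_000, 5_000_000)},
    "HR database":  {"p": (0.005, 0.10), "impact": (100_000, 10_000_000)},
    "build server": {"p": (0.02, 0.30),  "impact": (10_000, 1_000_000)},
}

def rank(seed):
    """Draw one point estimate from each range and rank by p * impact."""
    rng = random.Random(seed)
    scores = {
        name: rng.uniform(*est["p"]) * rng.uniform(*est["impact"])
        for name, est in systems.items()
    }
    return tuple(sorted(scores, key=scores.get, reverse=True))

# Equally defensible estimates, different priority orders.
orderings = {rank(seed) for seed in range(20)}
print(f"{len(orderings)} distinct priority orders from 20 estimate draws")
```

When the ranking flips depending on which guess you drew, the exercise hasn't actually told you where to act first.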
That's not to say that P(event) * $(impact) never results in meaningful risk ranking - when skilled resources are expended to do it right, it can. But most times that doesn't happen - and invariably it doesn't/can't happen often enough. I've seen a lot of enterprise spending on these risk exercises (both internal efforts and bringing in consultants) produce meaningful results that were totally meaningless six months later - and no business can afford that level of effort on any regular basis, let alone continuously.
One very important goal of that old approach was its focus on impact - and losing that focus was one of the criticisms of my simplified equation. That's a valid comment, and I'll address it in my next post.