
SANS Security Trend Line

Ramblings on Risk Part I

I recently gave a webinar talk on Security Analytics that included a simplified risk equation I've been showing for years:

Risk = (Threat * Vulnerability) + Action

I'll explain that more in a bit. After the webinar I got some Twitter feedback that it was better to stick with the more historical risk equation:

Risk = Probability of event * Cost/impact of event

For over a decade at Gartner (and at TIS before that) I've railed against that equation - when applied to information security, I've rarely, if ever, seen it provide value. In fact, that approach was gospel for federal government systems until it was abandoned when OMB A-130 guidance was updated in the mid-1990s.

To me, the value of any risk estimation exercise is to produce a meaningful way to prioritize action - which risks should I address first, working my way down the list until I run out of resources. That doesn't match what textbooks describe as Real™ Risk Management, mainly because they tend to apply techniques that work in financial risk management - where the probability of loan defaults, currency fluctuations, etc. and the cost impact of those events are much more measurable, and have history behind them. Most importantly: the financial events being modeled have well-established time profiles, where probability and impact generally decline over time.

Applying that risk equation outside of straightforward financial cases works somewhat in manufacturing systems and other areas where the events are outages and the cost of downtime is well known. But almost universally that equation fails for IT systems and applications, and becomes a time-consuming exercise that amounts to taking a very small imaginary number (the probability of an event), multiplying it by a very large imaginary number (the cost/impact of that event), and coming up with a medium-sized imaginary number for the predicted risk impact.
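To see why, here is a minimal Python sketch (all of the figures are hypothetical, chosen only for illustration) of what happens when a very uncertain probability is multiplied by a very uncertain impact figure:

# Hypothetical guesses - neither number is grounded in real loss data.
low_p, high_p = 0.001, 0.05                 # guessed annual probability of the event
low_cost, high_cost = 100_000, 20_000_000   # guessed cost/impact of the event, in dollars

low_risk = low_p * low_cost      # optimistic corner of the estimate
high_risk = high_p * high_cost   # pessimistic corner of the estimate

print(f"Estimated annual risk: ${low_risk:,.0f} to ${high_risk:,.0f}")
# prints: Estimated annual risk: $100 to $1,000,000

The output spans four orders of magnitude - that is the medium-sized imaginary number problem: the result looks precise but is no more trustworthy than the guesses that went into it.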

Now, there is nothing wrong with that, if those medium-sized imaginary numbers lend themselves to comparison with each other - so that the higher values really do represent the areas where action should be taken most quickly - and if the resources expended coming up with the risk estimates don't consume too high a percentage of the resources available to actually take action.

In most cases the P(event) * $(impact) approach fails on both of those factors. That is mostly because the value of information assets and the real cost of information security events are not as well defined (or in most cases even defined at all) as they are in the financial world. Those issues cause the estimated risks to vary widely and largely randomly, and the effort required to make any meaningful estimate of those two parameters to be very high - so most estimates are not very meaningful.

That's not to say that P(event) * $(impact) never results in meaningful risk ranking - when skilled resources are expended to do it right, it can. But most times that doesn't happen - and invariably it doesn't/can't happen often enough. I've seen a lot of enterprise spending on these risk exercises (both internal efforts and bringing in consultants) produce meaningful results that were totally meaningless six months later - and no business can afford that level of effort on any regular basis, let alone continuously.

One very important goal of that old approach was its focus on impact - and the absence of an impact term was one of the criticisms of my simplified equation. That's a valid comment that I'll address in my next post.

 

4 Comments

Posted January 27, 2014 at 10:17 PM

Sarah Clarke

Can't wait for part 2, John. This strikes all kinds of chords with me. Immovable risks on IT risk profiles. Impossible comparisons to prioritise remediation. Never-ending attempts to defend movement towards a tolerable risk position or retain budget to finish the job, as the starting point itself can't be defended. Credibility lost and resulting exec apathy until the next incident. Great stuff, keep it coming.

Posted January 28, 2014 at 11:58 AM

John Pescatore

Thanks, Sarah - I'm going to try to put out part 2 this week.

Posted February 6, 2014 at 7:06 AM

James W. De Rienzo

The Risk Profile Matrix shares traits with "The Battleship Game," played on an 8x8 grid. Individual squares in the grid are identified by letter and number. With the Risk Profile Matrix, the Y-axis represents the LIKELIHOOD coordinate and the X-axis represents the IMPACT coordinate. The product of the paired coordinates = Risk Rating. Project Managers associate the risk rating with management decisions. Risk Managers associate the risk rating with computer assets and threat actors (environmental, internal, external). Risk Tolerance is expressed as a dotted line dividing the Risk Matrix between ACCEPTABLE and UNACCEPTABLE RISK.
Risk tolerance can also be expressed in terms of color, i.e., Low=Green, Moderate=Yellow, High=Red. Each color represents a range of Risk Rating values (1,2,3=Low; 4,5,6=Moderate; 7,8,9=High). The maximum allowed Risk Rating represents your maximum Risk Tolerance value, i.e., 4=maximum tolerance. In Risk Management, you apply countermeasures based on a risk decision to reduce the risk rating associated with a valuable asset. Countermeasures mitigate vulnerabilities associated with valuable assets by reducing the likelihood of a threat consequence.
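For anyone who wants to play with the idea, here is a minimal Python sketch of the matrix described in the comment above, assuming a 3x3 grid where likelihood and impact are each scored 1-3, and using the color bands and tolerance value (4) given there:

# Risk Profile Matrix sketch: rating = likelihood * impact (both scored 1-3).
MAX_TOLERANCE = 4  # ratings above this fall on the UNACCEPTABLE side of the dotted line

def color(rating):
    # Color bands from the comment: 1-3 = Low, 4-6 = Moderate, 7-9 = High.
    if rating <= 3:
        return "Green (Low)"
    if rating <= 6:
        return "Yellow (Moderate)"
    return "Red (High)"

for likelihood in (1, 2, 3):      # Y-axis: LIKELIHOOD
    for impact in (1, 2, 3):      # X-axis: IMPACT
        rating = likelihood * impact
        status = "ACCEPTABLE" if rating <= MAX_TOLERANCE else "UNACCEPTABLE"
        print(f"L={likelihood} I={impact} rating={rating} {color(rating)} {status}")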

Posted February 6, 2014 at 11:43 AM

John Pescatore

I've seen this matrix in risk management textbooks and courses, not so much in practice.
It is really just the old Probability of Event * Impact of Event put in matrix form. The dotted line is nothing but a threshold on that equation. So, it suffers from all the problems I mentioned in part I.
