Most of the computer security white papers in the Reading Room have been written by students seeking GIAC certification to fulfill part of their certification requirements and are provided by SANS as a resource to benefit the security community at large. SANS attempts to ensure the accuracy of information, but papers are published "as is". Errors or inconsistencies may exist or may be introduced over time as material becomes dated. If you suspect a serious error, please contact email@example.com.
According to a September 2011 survey, 63% of respondents indicated “that employee use of social media puts their organization’s security at risk,” while only 29% “say they have the necessary security controls in place to mitigate or reduce the risk” (Ponemon Institute, 2011).
This paper presents the major statistical methods used in risk measurement and audit, and extends them into other processes used within systems engineering (Elliott, Jeanblanc, & Yor, 2000).
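One of the most common quantitative methods in security risk measurement and audit is annualized loss expectancy (ALE), the product of single loss expectancy (SLE) and annualized rate of occurrence (ARO). The sketch below is a generic illustration of that standard calculation; the function names and figures are hypothetical and are not drawn from the cited work.

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE: expected loss from a single occurrence of a threat.
    exposure_factor is the fraction of asset value lost (0.0 to 1.0)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """ALE = SLE * ARO, where ARO is the expected number of
    occurrences of the threat per year."""
    return sle * aro

# Hypothetical example: a $200,000 asset, 25% of its value lost
# per incident, incidents expected twice a year.
sle = single_loss_expectancy(200_000, 0.25)   # 50000.0
ale = annualized_loss_expectancy(sle, 2.0)    # 100000.0
print(sle, ale)
```

An analyst can then compare the ALE against the annual cost of a countermeasure to decide whether a control is economically justified.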
Absolute security does not exist, nor can it be achieved. The claim that a computer is either secure or not secure is logically falsifiable (Peisert & Bishop, 2007); all systems exhibit some level of insecurity.
“Perceived Control” is a core construct used in the field of psychology that can be considered an aspect of empowerment (Eklund & Backstrom, 2006). Effectively, it is a measure of how much control people feel that they have, as opposed to the amount of “Actual Control” that they may have. It is often contrasted with constructs such as “Vicarious Control” and “Vicarious Perceived Control”, which measure the amount of control that outside entities have over the subject. These variables are often measured in the psychology and health fields. For example, in the world of medicine, when patients report a lack of perceived control over controllable illnesses such as diabetes (Helgeson & Franzen, 1997), breast cancer (Helgeson, 1992) and heart disease (Helgeson, 1992), they often do more poorly than patients who feel that they have a greater sense of control over their illness. There is also evidence that students with high perceived control do substantially better academically than those with low perceived control, though this effect also appears to be linked with emotions surrounding the tasks at hand (Ruthig, Perry, Hladkyj, Hall, & Pekrun, 2008). In short, people who are interested in and excited by what they are doing tend to perform better.
Software patching has always been an integral part of maintaining functional, usable and stable software for IT departments across the organizational landscape. Historically, the traditional patch cycle has focused on fixing or resolving issues that affect functionality. In recent years, with the advancement of more sophisticated and targeted threats occurring in quicker cycles, this focus is changing dramatically (Risk Assessment – Cisco, n.d.; Executive Office of The United States, 2005). Corporations and governments now have a greater understanding of the potential losses and expenses incurred by not maintaining application security, and are moving towards an increased focus on patching and security (Epstein, Grow & Tschang, 2008). With organizations’ reputations, consumer confidence and corporate secrets at risk, corporations and governments are recognizing the need to address vulnerabilities at a much faster pace than they historically have done (Chan, 2004). Over roughly the last ten years, the length of time between the documentation of a given vulnerability in a piece of software and the development of an actual exploit that can take advantage of the weakness in the application has decreased tremendously. According to Andrew Jaquith, senior analyst at Yankee Group, the average time between vulnerability discovery and the release of exploit code is less than one week (“Shrinking time from,” 2006). It has also been reported that “99% of intrusions result from exploitation of known vulnerabilities or configuration errors where countermeasures were available” (“Risk reduction and,” 2010). Clearly these statistics alone can prove daunting for many businesses trying to keep pace and maintain proper defenses against the bad guys.
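The shrinking window between disclosure and exploit can be made concrete with a simple patch-prioritization sketch. Assuming hypothetical vulnerability records with a disclosure date and an exploit-availability flag (the field names here are illustrative, not from any cited standard), patches might be ordered by exploit availability first and then by how long the exposure window has been open:

```python
from datetime import date

def exposure_days(disclosed, today):
    """Number of days the vulnerability has been publicly known."""
    return (today - disclosed).days

def prioritize(vulns, today):
    """Sort so that vulnerabilities with known exploit code come first,
    then by longest exposure window. Record fields are hypothetical."""
    return sorted(
        vulns,
        key=lambda v: (not v["exploit_available"],
                       -exposure_days(v["disclosed"], today)),
    )

# Hypothetical data set.
today = date(2012, 3, 1)
vulns = [
    {"id": "A", "disclosed": date(2012, 2, 20), "exploit_available": False},
    {"id": "B", "disclosed": date(2012, 2, 25), "exploit_available": True},
    {"id": "C", "disclosed": date(2012, 1, 10), "exploit_available": True},
]
print([v["id"] for v in prioritize(vulns, today)])  # ['C', 'B', 'A']
```

Real prioritization schemes would also weigh severity scores and asset criticality, but even this minimal ordering reflects the statistics above: a vulnerability with published exploit code and a long-open window demands attention first.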
The fallacy of the black swan in risk has come full circle in information systems. Just as the deductive fallacy “a dicto secundum quid ad dictum simpliciter” allowed the false assertion that black swans could not exist when in fact they do, we now see assertions that risk cannot be modeled without knowing all of the “black swans” that could exist. The falsity of the black swan argument derives from the generalization “every swan I have seen is white, so it must be true that all swans are white”. The problem is that what one has seen is only a subset of the entire set; one cannot have seen all swans.
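The inductive gap described above, that observed swans are only a sample of all swans, can be expressed statistically rather than deductively. Laplace's rule of succession, for instance, never assigns zero probability to an as-yet-unobserved outcome. The sketch below is a generic illustration of that rule, not a method proposed in this text.

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's estimate of the probability that the next trial
    succeeds, given `successes` out of `trials` observed so far:
    (s + 1) / (n + 2). Never returns 0 or 1 for any finite sample."""
    return Fraction(successes + 1, trials + 2)

# After observing 1000 swans, all of them white, the estimated
# probability that the next swan is non-white remains positive:
p_nonwhite = 1 - rule_of_succession(1000, 1000)
print(p_nonwhite)  # 1/1002
```

The point is the same as the passage's: no finite sample licenses the claim that a "black swan" event is impossible; a sound risk model keeps a non-zero residual probability for the unseen.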