SANS 2017 SOC Survey is NOW OPEN - It takes a village to protect today's networks from cyber threats. Tell us how your organization is accomplishing these tasks and enter to win a $400 Amazon gift card! https://www.surveymonkey.com/r/2017SANSSOCSurvey
SANS 2017 Threat Hunting Survey - Is threat hunting proactive, reactive or both? Tell us in this SANS survey and enter to win a $400 Amazon Gift Card: https://www.surveymonkey.com/r/2017SANSThreatHuntingSurvey
More than 75,000 unique visitors read papers in the Reading Room every month and it has become the starting point for exploration of topics ranging from SCADA to wireless security, from firewalls to intrusion detection. The SANS Reading Room features over 2,620 original computer security white papers in 101 different categories.
Latest 25 Papers Added to the Reading Room
Minimizing Legal Risk When Using Cybersecurity Scanning Tools STI Graduate Student Research
by John Dittmer - January 19, 2017 in Legal Issues
When cybersecurity professionals use scanning tools on the networks and devices of organizations, there can be legal risks that need to be managed by individuals and enterprises. Often, scanning tools are used to measure compliance with cybersecurity policies and laws, so they must be used with due care. There are protocols that should be followed to ensure proper use of the scanning tools to prevent interference with normal network or system operations and to ensure the accuracy of the scanning results. Several challenges will be examined in depth, such as measuring scanner accuracy, proper methods of obtaining written consent for scanning, and how to set up a scanning session for optimum examination of systems or networks. This paper will provide cybersecurity professionals and managers with a better understanding of how and when to use the scanning tools while minimizing the legal risk to themselves and their enterprises.
Packets Don't Lie: LogRhythm NetMon Freemium Review Analyst Paper
by Dave Shackleford - January 18, 2017 in Intrusion Detection, Data Loss Prevention
- Associated Webcasts: Packets Don't Lie: What's Really Happening on Your Network?
- Sponsored By: LogRhythm
With more traffic than ever passing through our environments, and adversaries who know how to blend in, network security analysts need all the help they can get. At the same time, data is leaking out of our environments right under our noses. This paper investigates how LogRhythm's Network Monitor Freemium (NetMon Freemium) Version 3.2.3 provides intelligent monitoring, and helps organizations to identify sensitive data leaving the network and to respond when loss occurs.
Leveraging the Asset Inventory Database STI Graduate Student Research
by Timothy Straightiff - January 4, 2017 in Critical Controls
A well maintained Asset Inventory Database can aid in building a more comprehensive security program based on the CIS Critical Security Controls (CSC). Adding inputs and outputs to the database workflow will help the organization with several of the Critical Security Controls. The Critical Security Controls define a list of prioritized controls that, when followed, can improve the security foundation of an organization. The controls are most effective when implemented in order. Keeping an integrated and well maintained Asset Inventory Database with the proper inputs and outputs can serve as a foundational element in any comprehensive security program.
Data Breach Impact Estimation STI Graduate Student Research
by Paul Hershberger - January 3, 2017 in Data Protection, Data Loss Prevention
Internal and external auditors spend a significant amount of time planning their audit processes to align their efforts with the needs of the audited organization. The initial phase of that audit cycle is the risk assessment. Establishing a firm understanding of the likelihood and impact of risk guides the audit function and aligns its work with the risks the organization faces. The challenge many auditors and security professionals face is effectively quantifying the potential impact of a data breach to their organization. This paper compares the data breach cost research of the Ponemon Institute and the RAND Corporation, measuring both models against breach costs reported by publicly traded companies under Securities and Exchange Commission (SEC) reporting requirements. The comparisons will show that the RAND Corporation's approach provides organizations with a more accurate and flexible model to estimate the potential cost of data breaches, both the direct cost of investigating and remediating a breach and the indirect financial impact associated with regulatory and legal action. Additionally, the comparison indicates that data breach-related impacts to revenue and stock valuation are only realized in the short term.
Is Anyone Out There? Monitoring DNS for Misuse STI Graduate Student Research
by Kaleb Fornero - December 30, 2016 in DNS Issues
In the early 1980s, a system was born by which millions of users would unlock the untold amounts of computer information located around the world. The creation of the Domain Name System (DNS) not only allowed for the traversal of the Internet with user-friendly URLs, but also created a means of misuse, a means of deception. This paper will outline the way in which DNS may be abused for command and control channels as well as data exfiltration by deconstructing deceptive packets and outlining the anomalies within them. With this analytical information, the development of active network monitoring rules will be provided to detect these irregularities and identify DNS exploitation.
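The anomalies the paper describes can be illustrated with a small detection sketch. The heuristic below (not taken from the paper; thresholds and the entropy cutoff are assumptions for illustration) flags DNS query names whose subdomain labels look like encoded payloads, a common trait of tunneling and exfiltration tools:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_suspicious_query(qname, max_label_len=52, entropy_threshold=4.0):
    """Flag DNS query names whose labels look like encoded payloads.

    Tunneling tools often pack data into long, high-entropy subdomain
    labels; legitimate hostnames are usually short, low-entropy words.
    Thresholds here are illustrative and would need site-specific tuning.
    """
    labels = qname.rstrip(".").split(".")
    for label in labels[:-2]:  # skip the registered domain and TLD
        if len(label) > max_label_len:
            return True
        if len(label) >= 16 and shannon_entropy(label) > entropy_threshold:
            return True
    return False
```

In practice a rule like this would run over query names extracted from passive DNS logs or IDS output, with alert thresholds tuned against a sample of known-good traffic.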
Real-World Case Study: The Overloaded Security Professional's Guide to Prioritizing Critical Security Controls STI Graduate Student Research
by Phillip Bosco - December 27, 2016 in Critical Controls
Using a real-world case study of a recently compromised company as a framework, we will step inside the aftermath of an actual breach and determine how the practical implementation of Critical Security Controls (CSC) may have prevented the compromise entirely while providing greater visibility inside the attack as it occurred. The breached company's information security "team" consisted of a single over-worked individual, who found it arduous to identify which critical controls he should focus his limited time implementing. Lastly, we will delve into real-world examples, using previously unpublished research, that serve as practical approaches for teams with limited resources to prioritize and schedule which CSCs will provide the largest impact towards reducing the company's overall risk. Ideally, the observations and approaches identified in this research paper will assist security professionals who may be in similar circumstances.
Legal Considerations When Creating an Incident Response Plan STI Graduate Student Research
by Bryan Chou - December 22, 2016 in Legal Issues
Creating a cybersecurity incident response plan (CSIRP) is a basic requirement of any security program. CSIRPs generally follow the six phases of the incident response process (preparation, identification, containment, eradication, recovery, and lessons learned) or some derivation of those steps (Kral, 2011). Once a security event begins, the cybersecurity incident response team (CSIRT) is focused on identification, containment, eradication, and recovery. In other words, they are trying to get operations back to normal. The preparation phase is the time to thoughtfully consider and research the legal decisions required during a security event. Legal considerations to address in the CSIRP include the pertinent laws and regulations, what to do if prosecution is a possibility, and how to maintain attorney-client privilege.
Finding Bad with Splunk STI Graduate Student Research
by David Brown - December 16, 2016 in Critical Controls
There is such a deluge of information that it can be hard for information security teams to know where to focus their time and energy. This paper will recommend common Linux and Windows tools to scan networks and systems, store results to local filesystems, analyze results, and pass any new data to Splunk. Splunk will then help security teams narrow in on what has changed within the networks and systems by alerting the security teams to any differences between old baselines and new scans. In addition, security teams may not even be paying attention to controls, like whitelisting blocks, that successfully prevent malicious activities. Monitoring failed application execution attempts can give security teams and administrators early warnings that someone may be trying to subvert a system. This paper will guide the security professional on setting up alerts to detect security events of interest like failed application executions due to whitelisting. To solve these problems, the paper will discuss the first five Critical Security Controls and explain what malicious behaviors can be uncovered as a result of alerting. As the paper progresses through the controls, the security professional is shown how to set up baseline analysis, how to configure the systems to pass the proper data to Splunk, and how to configure Splunk to alert on events of interest. The paper does not revolve around how to implement technical controls like whitelisting, but rather how to effectively monitor the controls once they have been implemented.
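The baseline-versus-new-scan comparison the paper describes can be sketched simply. This example (mine, not the paper's; the "host:port to banner" snapshot shape is an assumption) computes the deltas between two scan snapshots, which is the kind of reduced event data you would forward to Splunk for alerting rather than the full scan output:

```python
def diff_scans(baseline, current):
    """Return what changed between two scan snapshots.

    Each snapshot maps an identifier (e.g. "host:port") to an observed
    value (e.g. a service banner).  Forwarding only these deltas keeps
    the data volume sent to the SIEM small and the alerts meaningful.
    """
    added   = {k: current[k] for k in current.keys() - baseline.keys()}
    removed = {k: baseline[k] for k in baseline.keys() - current.keys()}
    changed = {k: (baseline[k], current[k])
               for k in baseline.keys() & current.keys()
               if baseline[k] != current[k]}
    return added, removed, changed
```

For example, a newly opened port would surface in `added`, a version change in `changed`, and either could drive a Splunk alert on the indexed delta events.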
Continuous Monitoring: Build A World Class Monitoring System for Enterprise, Small Office, or Home STI Graduate Student Research
by Austin Taylor - December 15, 2016 in Critical Controls, Intrusion Detection
For organizations that wish to prevent data breaches, incident prevention is ideal, but detection of an attempted or successful breach is a must. This paper outlines guidance for network visibility, threat intelligence implementation and methods to reduce analyst alert fatigue. Additionally, this document includes a workflow for Security Operations Centers (SOC) to efficiently process events of interest, thereby increasing the likelihood of detecting a breach. Methods include Intrusion Detection System (IDS) setup with tips on efficient data collection, sensor placement, and identification of critical infrastructure, along with network and metric visualization. These recommendations are useful for enterprises, small offices, or homes that wish to implement threat intelligence and network analysis.
Detecting Malicious SMB Activity Using Bro STI Graduate Student Research
by Richie Cyrus - December 13, 2016 in Intrusion Detection
Attackers utilize the Server Message Block (SMB) protocol to blend in with network activity, often carrying out their objectives undetected. Post-compromise, attackers use file shares to move laterally, looking for sensitive or confidential data to exfiltrate from the network. Traditional methods for detecting such activity call for storing and analyzing large volumes of Windows event logs, or deploying a signature-based intrusion detection solution. For some organizations, processing and storing large amounts of Windows events may not be feasible. Pattern-based intrusion detection solutions can be bypassed by malicious entities, potentially failing to detect malicious activity. Bro Network Security Monitor (Bro) provides an alternative solution allowing for rapid detection through custom scripts and log data. This paper introduces methods to detect malicious SMB activity using Bro.
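One detection of this kind can be sketched as post-processing of Bro's SMB logs. The example below is mine, not from the paper: it assumes records already parsed from `smb_files.log` into dicts, and the field names (`id.orig_h`, `action`, `name`) and the `SMB::FILE_OPEN` action string follow Bro's conventions but should be checked against your Bro version. It flags hosts touching an unusually large number of distinct files, a common sign of post-compromise share harvesting:

```python
def flag_smb_enumerators(records, threshold=100):
    """Flag source hosts opening many distinct files over SMB.

    `records` is an iterable of dicts parsed from Bro's smb_files.log.
    A workstation that opens hundreds of distinct files on a share in a
    short window looks more like bulk harvesting than normal use; the
    threshold is an assumption to be tuned per environment.
    """
    files_per_host = {}
    for r in records:
        if r.get("action") == "SMB::FILE_OPEN":
            files_per_host.setdefault(r["id.orig_h"], set()).add(r["name"])
    return [host for host, files in files_per_host.items()
            if len(files) >= threshold]
```

A native Bro script could implement the same logic in-line on events, which is the approach the paper takes; the offline version here is easier to test against historical logs.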
SANS 2016 Security Analytics Survey Analyst Paper
by Dave Shackleford - December 6, 2016 in Security Analytics and Intelligence
- Associated Webcasts: Security Analytics in Action: SANS Fourth Annual Security Analytics Survey - Part 1 | Part 2 | SANS Security Analytics Survey Results: What's Working? What's Not?
- Sponsored By: LogRhythm Rapid7 Inc. AlienVault Lookingglass Cyber Solutions, Inc. Anomali
Survey respondents have become more aware of the value of analytics and have moved beyond using them simply for detection and response to using them to measure and aid in improving their overall risk posture. Still, we've got a long way to go before analytics truly progresses in many security organizations. Read on to learn more.
Active Defense via a Labyrinth of Deception STI Graduate Student Research
by Nathaniel Quist - December 5, 2016 in Active Defense
A network baseline allows for the identification of malicious activity in real time. However, a baseline requires that every listed action is known and accounted for, presenting a nearly impossible task in any production environment due to an ever-changing application footprint, system and application updates, changing project requirements, and, not least of all, unpredictable user behaviors. Each obstacle presents a significant challenge in the development and maintenance of an accurate, false-positive-free network baseline. To surmount these hurdles, network architects need to design a network free from continuous change, including changing company requirements, untested system or application updates, and the presence of unpredictable users. Creating a static, never-changing environment is the goal. However, this completely removes the functionality of a production network. Or does it? Within this paper, I will detail how this type of static environment, referred to as the Labyrinth, can be placed in front of a production environment and provide real-time defensive measures against hostile and dispersed attacks, from both human actors and automated machines. I expect to prove the Labyrinth is capable of detecting changes in its environment in real time. It will provide a listing of dynamic defensive capabilities, such as identifying attacking IP addresses, rogue process start commands, modifications to registry values, and alterations in system memory, while recording an attacker's tactics, techniques, and procedures. At the same time, the Labyrinth will add these values to a block list, protecting the production network lying behind it. Successful accomplishment of these goals will prove the viability and sustainability of the Labyrinth in defending network environments (Revelle, 2011).
Next Generation of Privacy in Europe and the Impact on Information Security: Complying with the GDPR STI Graduate Student Research
by Edward Yuwono - December 5, 2016 in Legal Issues
Human rights have a strong place within Europe, and part of this is the fundamental right to privacy. Over the years, individual privacy has been strengthened through various European directives. With the evolution of privacy continuing in Europe through the release of the General Data Protection Regulation (GDPR), how will the latest iteration of European Union (EU) regulation affect organisations, and what will information security leaders need to do to meet this change? This paper will explore the evolution of privacy in Europe, the objectives and changes this iteration of EU privacy regulation will provide, what challenges organisations will experience, and how information security could be leveraged to satisfy the regulation.
A Black-Box Approach to Embedded Systems Vulnerability Assessment STI Graduate Student Research
by Michael Horkan - December 5, 2016 in Security Basics, Risk Management
Vulnerability assessment of embedded systems is becoming more important due to security needs of the ICS/SCADA environment as well as the emergence of the Internet of Things (IoT). Often, these assessments are left to test engineers without intimate knowledge of the device's design, no access to firmware source or tools to debug the device while testing. This gold paper will describe a test lab black-box approach to evaluating an embedded device's security profile and possible vulnerabilities. Open-source tools such as Burp Suite and python scripts based on the Sulley Fuzzing Framework will be employed and described. The health status of the device under test will be monitored remotely over a network connection. I include a discussion of an IoT test platform, implemented for Raspberry Pi, and how to approach the evaluation of IoT using this device as an example.
Insider Threats and the Need for Fast and Directed Response Analyst Paper
by Dr. Eric Cole - December 1, 2016 in Threats/Vulnerabilities
- Associated Webcasts: Insider Threats and the Real Financial Impact to Organizations - A SANS Survey
- Sponsored By: Veriato
As breaches continue to cause significant damage to organizations, security consciousness is shifting from traditional perimeter defense to a holistic understanding of what is causing the damage and where organizations are exposed. Although many attacks are from an external source, attacks from within often cause the most damage. This report looks at how and why insider attacks occur and their implications.
Node Router Sensors: What just happened?
by Kim Cary - November 22, 2016 in Incident Handling, Logging Technology and Techniques, System Administration
When an airliner crashes, one of the most important tasks is the recovery of the flight recorder or black box. This device gives precise & objective information about what happened and when before the crash. When an information security incident occurs on a network, it is equally important to have access to precise information about what happened to the victim machine and what it did after any compromise. A network of devices can be designed, economically constructed and managed to automatically capture and make available this type of data to information security incident handlers. In any environment, this complete record of network data comes with legal and ethical concerns regarding its use. Proper technical, legal and ethical operation must be baked into the design and operational procedures for devices that capture information on any network. These considerations are particularly necessary on a college campus, where such operations are subject to public discussion. This paper details the benefits, designs, operational procedures and controls and sample results of the use of "Node Router Sensors" in solving information security incidents on a busy college network.
A Checklist for Audit of Docker Containers STI Graduate Student Research
by Alyssa Robinson - November 22, 2016 in Auditing & Assessment
Docker and other container technologies are increasingly popular methods for deploying applications in DevOps environments, due to advantages in portability, efficiency in resource sharing and speed of deployment. The very properties that make Docker containers useful, however, can pose challenges for audit, and the security capabilities and best practices are changing rapidly. As adoption of this technology grows, it is, therefore, necessary to create a standardized checklist for audit of Dockerized environments based on the latest tools and recommendations.
Security Assurance of Docker Containers STI Graduate Student Research
by Stefan Winkle - November 22, 2016 in Information Assurance, Cloud Computing, System Administration
With recent movements like DevOps and the conversion towards application security as a service, the IT industry is in the middle of a set of substantial changes in how software is developed and deployed. In the infrastructure space, we see the uptake of lightweight container technology, while application architectures are moving towards distributed microservices. There has been a recent explosion in popularity of package managers and distributors like OneGet, NPM, RubyGems and PyPI. More and more software development becomes dependent on small, reusable components developed by many different developers and often distributed by infrastructures outside our control. In the midst of this all, we often find application containers like Docker, LXC, and Rocket used to compartmentalize software components. The Notary project, recently introduced in Docker, is built upon the assumption that the software distribution pipeline can no longer be trusted. Notary attempts to protect against attacks on the software distribution pipeline by associating trust and separation of duties with Docker containers. In this paper, we explore the Notary service and take a look at security testing of Docker containers.
BGP Hijinks and Hijacks - Incident Response When Your Backbone Is Your Enemy STI Graduate Student Research
by Tim Collyer - November 21, 2016 in Incident Handling
The Border Gateway Protocol (BGP) is used to route packets across the Internet, usually at the level of the Internet backbone where Internet Service Providers (ISPs) pass traffic amongst themselves. Unfortunately, like many of the protocols underpinning modern networks such as the Internet, BGP was not designed with security in mind. This lack of security within BGP means that traffic is susceptible to misdirection and manipulation through either misconfiguration or malicious intent. Among the traffic manipulation possible within BGP routing is Autonomous System (AS) path injection, in which a new router can insert itself into the routing path of traffic. This can create a man-in-the-middle condition if the path injection is malicious in nature. Differentiating between a malicious incident and mere misconfiguration can be extremely challenging. Even more difficult for an affected company is to conduct incident response during a BGP-related incident. This paper explores the incident response options currently available to security teams to prevent, detect, and where possible, respond should a BGP incident arise.
Reducing Attack Surface: SANS' Second Survey on Continuous Monitoring Programs Analyst Paper
by Barbara Filkins - November 14, 2016 in Critical Controls, Management & Leadership
- Associated Webcasts: Vulnerabilities, Controls and Continuous Monitoring: The SANS 2016 Continuous Monitoring Survey
- Sponsored By: ForeScout Technologies Qualys IBM RiskIQ
Continuous monitoring is not a single activity. Rather, it is a set of activities, tools and processes (asset and configuration management, host and network inventories, and continuous vulnerability scanning) that must be integrated and automated all the way down to the remediation workflow. Although CM is shifting focus and slowly improving, it still has a way to go to attain the maturity needed to become a critical part of an organization's business strategy.
Auditing Windows installed software through command line scripts STI Graduate Student Research
by Jonathan Risto - November 14, 2016 in Auditing & Assessment, Critical Controls
The 20 Critical Controls provide guidance on managing and securing our networks. The second control states that there should be a software inventory of the products installed on all devices within the infrastructure. This paper will enable the auditor to compare Windows system baseline information against the currently installed software configuration. The command line tools utilized will be discussed, and scripts are provided to simplify and automate these tasks.
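A comparison of this kind can be sketched as follows. This example is not from the paper: it assumes the inventory was captured with `wmic product get name,version /format:csv` (whose first CSV column is the node name), and the baseline is simply a set of approved (name, version) pairs:

```python
def parse_wmic_csv(text):
    """Parse `wmic product get name,version /format:csv` output into a
    set of (name, version) tuples.

    The wmic CSV format emits a Node,Name,Version header and one row
    per installed product; the node column is discarded here.
    """
    entries = set()
    for line in text.strip().splitlines():
        parts = line.strip().split(",")
        if len(parts) != 3 or parts[1] == "Name":
            continue  # skip headers and malformed/blank rows
        _node, name, version = parts
        entries.add((name, version))
    return entries

def unauthorized(current, baseline):
    """Software present on the system but absent from the approved baseline."""
    return current - baseline
```

Running the parse against a saved baseline export and a fresh export, then set-differencing the two, yields the deviation report the second Critical Control asks for.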
Network Inspection of Duplicate Packets STI Graduate Student Research
by Randy Devlin - November 11, 2016 in Intrusion Detection, Intrusion Prevention, IPS
Network Intrusion Analysis enables a security analyst to review network traffic for protocol conformity and anomalous behavior. The analyst's goal is to detect network intrusion activity in near-real time. The detection provides details as to who the attackers are, the attack type, and potential remediation responses. Is it possible that a network security stack could render the analyst "blind" to detecting intrusions? This paper will review architecture, traffic flow, and inspection processes. Architecture review validates proper sensor placement for inspection. Traffic flow analyzes sources and destinations, approved applications, and known traffic patterns. Inspection process evaluates protocols and packet specific details. The combination of these activities can reveal scenarios that potentially result in limitations of network security inspection and analysis.
Forcepoint Review: Effective Measure of Defense Analyst Paper
by Eric Cole, PhD - November 9, 2016 in Intrusion Detection, Firewalls & Perimeter Protection, Intrusion Prevention
Effective security is all about the quality of the solution, not the quantity of products. Indeed, buying more products can make the problem worse. All of the major breaches over the last several years have had one thing in common: Multiple products were issuing alerts, but there were too many alerts and not enough people charged with monitoring and responding to them. When that is the case, putting more products in place spreads current resources even thinner--the problem gets worse, not better. This paper explains the advantages of an integrated defense-in-depth approach to security and looks at how Forcepoint's integrated solution suite meets the needs of such an approach.
The Age of Encryption STI Graduate Student Research
by Wes Whitteker - November 7, 2016 in Encryption & VPNs
Over the last few years, there has been an increasing movement toward encrypting Internet communication. Though this movement increases the confidentiality of transmitted information, it also severely limits the ability of security tools to analyze Internet traffic for malicious content. This paper investigates the growth of encrypted Internet traffic (i.e. HTTPS) and its impact on cybersecurity. This paper also proposes an open source solution for decrypting and inspecting Internet traffic, accommodating IPv4 and IPv6, for both home and small-to-medium sized business (SMB) use.
Implementing Full Packet Capture STI Graduate Student Research
by Matt Koch - November 7, 2016 in Forensics
Full Packet Capture (FPC) provides a network defender an after-the-fact investigative capability that other security tools cannot provide. Uses include capturing malware samples, network exploits and determining if data exfiltration has occurred. Full packet captures are a valuable troubleshooting tool for operations and security teams alike. Successful implementation requires an understanding of organization-specific requirements, capacity planning, and delivery of unaltered network traffic to the packet capture system.
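The capacity-planning step the abstract mentions reduces to simple arithmetic: sustained bandwidth times retention window, plus overhead. The helper below is an illustration of that calculation, not a formula from the paper; the 10% overhead multiplier for pcap headers and filesystem slack is an assumption:

```python
def fpc_storage_tb(avg_mbps, retention_days, overhead=1.1):
    """Estimate disk needed for full packet capture, in terabytes.

    avg_mbps: average sustained traffic in megabits per second.
    overhead: multiplier for pcap headers and filesystem slack
    (the 10% default here is an assumption, not a vendor figure).
    """
    bytes_per_day = avg_mbps / 8 * 1e6 * 86400   # Mb/s -> bytes/day
    return bytes_per_day * retention_days * overhead / 1e12  # -> TB
```

For example, a sustained 1 Gbps link retained for a week works out to roughly 83 TB, which is why requirements gathering and capture filtering matter before hardware is purchased.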
All papers are copyrighted. No re-posting or distribution of papers is permitted.